macOS, Visual Studio Code, Python 3.7.5, OpenCV4

It took a few attempts to get a compatible Python and OpenCV library running under Visual Studio Code on macOS Catalina using a virtual environment. I made a video to show how I got this going – this post just adds some more details.

There is also an excellent tutorial from Microsoft:

Getting Started with Python in VS Code

Note: virtual machine rendering problem

Visual Studio Code running on a virtual machine may have problems rendering the interface. This seems to be related to the underlying Electron framework and GPU acceleration. I made a quick video to show how I got around this:

Fix rendering problems for Visual Studio Code running on a virtual machine

Install Python 3.7.5

A virgin Mac comes with Python 2.7 installed – this is not recommended, and Python 3.7.5 is the version that works with OpenCV 4 on a Mac. Python 3.8 did not work at the time of writing (although since I started writing this post it looks like it now does). Download the installer from the main Python website by selecting Downloads, then Mac OS X, and then the 64-bit installer:

Run the installer – I used all default settings.
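
Once the installer has finished, you can confirm the version from a Terminal window (assuming the python.org installer has put python3 on your PATH, which it does by default):

python3 --version

This should report Python 3.7.5.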

Install Visual Studio Code

Download Visual Studio Code from its website and immediately move the downloaded file to the Applications folder (this is the actual application, not an installer). Try to run it once – macOS will refuse due to security:

Close the message, open System Preferences, and select the Security & Privacy settings. Then select “Open Anyway” to allow Visual Studio Code to run.

Visual Studio Code should now start:

Configure Python

Open a folder by selecting Open folder and then add a new file. Save the file using the .py extension:

Visual Studio Code immediately offers to install the Python extension – select Install:

On a virgin Mac there will now be a prompt to install the command line developer tools; click Install and allow the installation to complete before returning to Visual Studio Code.

If everything has gone well, the status bar will show the selected interpreter:

Install the linter (pylint): this helps analyse the code for bugs and style issues. It might not work first time, but we can fix that shortly…
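
If the install prompt doesn’t appear, pylint can also be installed manually from the terminal window:

python3 -m pip install pylint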

If the terminal window suggests upgrading pip, the Python package manager, then go for it by running:

python3 -m pip install --upgrade pip

Make a virtual environment

A virtual environment is a self-contained directory tree that contains a Python installation for a particular version of Python

https://docs.python.org/3/tutorial/venv.html

Each project can use its own virtual environment to ensure any modules it requires don’t clash with modules in other projects.

From the terminal create a virtual environment:

python3 -m venv .venv

Visual Studio Code will detect this new environment and offer to select it for the current project folder – select Yes:

Because this is a new Python environment you may need to install the linter again:

Now – the bit that confused me… the project is using the .venv virtual environment:

However, the terminal session has so far only created the environment; it has not activated it for itself. The shell prompt still says:

jon@Jons-MacBook-Pro Python %

There are two ways to fix this. First, using the source command in the terminal window:

source .venv/bin/activate

Second, by creating a new Terminal session using the Command Palette (select View, then Command Palette):

Now the terminal shows that it’s using the virtual environment:
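
The prompt gains the environment name as a prefix, something like:

(.venv) jon@Jons-MacBook-Pro Python %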

Install OpenCV

At last we can install OpenCV. Using the terminal session in the virtual environment we can first search for OpenCV packages:

python3 -m pip search opencv  

When called with -m module-name, the given module is located on the Python module path and executed as a script

https://docs.python.org/3/using/cmdline.html

We see results like this:

opencv-utils (0.0.2) – OpenCV Utilities
ctypes-opencv (0.8.0) – ctypes-opencv – A Python wrapper for OpenCV using ctypes
opencv-wrapper (0.2.3) – A Python wrapper for OpenCV.
opencv-cython (0.4) – An alternative OpenCV wrapper
dajngo-opencv (0.3) – Django Opencv integratio
opencv-python (4.1.2.30) – Wrapper package for OpenCV python bindings

For this test I’m using opencv-python. The details on version 4.1.2.30 can be found on the Python Package Index site. Interestingly this version was only released a few hours ago and says it supports Python 3.8 😬 I guess I’ll try this on a virtual machine first to check it’s all ok!

Install OpenCV using pip:

python3 -m pip install opencv-python

Write some code and fix the linter

First test: import the OpenCV module and print the library version.

import cv2
print('Using OpenCV version {0}'.format(cv2.__version__))

After running, the output is shown in the terminal:
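
In my case (with opencv-python 4.1.2.30 installed) it reads something like:

Using OpenCV version 4.1.2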

But – there’s a problem. In the editor the linter is suggesting that cv2 is not a known module:

This has been seen before on the pylint GitHub issues page. For me, the solution is to edit the .vscode settings. Use ⇧⌘E (Shift+Command+E) to view the Explorer pane, expand the .vscode folder, and click settings.json:

Add a comma to the end of the existing setting’s line, then add the following new setting:

"python.linting.pylintArgs": ["--generate-members"]

My settings now look like this:
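
Something along these lines, that is – the python.pythonPath entry is added automatically when the .venv interpreter is selected, so the exact value (and any other settings VS Code has added) may differ on your machine:

{
    "python.pythonPath": ".venv/bin/python",
    "python.linting.pylintArgs": ["--generate-members"]
}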

And now the red squiggle has gone from cv2.__version__ 😀
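
As a slightly bigger sanity check – just a sketch, with image.jpg standing in for any image file you have to hand – loading an image and printing its size exercises a little more of the library:

import cv2

# imread returns None rather than raising an error if the file can't be read
image = cv2.imread('image.jpg')
if image is None:
    print('Could not read image.jpg')
else:
    height, width = image.shape[:2]
    print('Loaded image.jpg at {0}x{1} pixels'.format(width, height))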

All that remains is to learn Python and OpenCV which will surely lead to great things!

Hope this helps.

Apple’s best hidden Trackpad gesture!

Apple trackpads, both external and built-in, have one fantastic gesture which is not enabled by default:

3 finger drag

https://support.apple.com/en-us/HT204609

This allows you to easily grab windows and files without needing to hold down the trackpad like a mouse button. For GUI designers it means quickly resizing and positioning controls, like Tom Cruise in Minority Report 😎

3 useful keyboard mods for better editor support on Windows 10 using Parallels

Tested with Parallels Desktop 15 Pro Edition on macOS Catalina.

Problem 1: using Visual Studio, Notepad++, or any similar multi-document application in Windows 10 on a Mac using Parallels, ⌘W (Command+W) maps to Alt+F4. This means that to close just an editor page you have to revert to the original Windows shortcut of Ctrl+F4, which is a minor pain on a Mac with a Touch Bar instead of function keys.

Solution 1: change the Parallels preferences to remap ⌘W to Ctrl+F4. ⌘Q will still close an application but ⌘W will now close an internal editor window; this is the same behaviour used in Safari to close the whole application (⌘Q) or close just one page (⌘W).

Problem 2: Control+Tab and Shift+Control+Tab don’t switch between editor windows.

Solution 2: a recent update to Parallels resulted in these shortcuts being used for Parallels’ own tab switching, so they don’t get passed on to the VM. Unchecking these shortcuts in the Parallels preferences fixes the problem.

Windows system timer granularity

While running one of my apps on a Windows 10 VM I noticed that the timing was quite different from that seen on the host PC. After lots of digging I finally found that the granularity of the system timer on the VM was around 16ms versus around 0.5ms on the host PC. My app uses some 1–5 millisecond sleeps, but when the granularity is 16ms then a 1ms sleep becomes 16ms! (The actual granularity is 15.6ms due to the default 64Hz timer frequency.)

I solved my problem by setting the granularity to the minimum supported by the PC; this setting remains in place until the application exits. It seems that my VM doesn’t have anything running that would otherwise cause the timer to run more quickly than the default (64Hz), whereas my development PC must have all sorts of things running the timer flat out – probably one reason its battery goes down more quickly than expected!

To query and change the granularity I used these methods via C#:

I then wrote a little wrapper class to let me play with the timings using the .NET TimeSpan. Note: this is a frustrating struct to use because it really doesn’t want to deal in fractions of a millisecond without more than a bit of persuasion – specifically, FromMilliseconds only considers the requested value to the nearest millisecond.

using System;
using System.Runtime.InteropServices;

/// <summary>
/// Utility to query and set the Windows system timer resolution
/// </summary>
class TimerResolution
{
    [DllImport("ntdll.dll", SetLastError = true)]
    private static extern int NtQueryTimerResolution(out int MinimumResolution, out int MaximumResolution, out int CurrentResolution);


    [DllImport("ntdll.dll", SetLastError = true)]
    private static extern int NtSetTimerResolution(int DesiredResolution, bool SetResolution, out int CurrentResolution);


    private static TimeSpan TimeSpanFrom100nsUnits(int valueIn100nsUnits)
    {
        // One TimeSpan tick is 100ns, so the native 100ns units map directly onto ticks
        return TimeSpan.FromTicks(valueIn100nsUnits);
    }


    private static (TimeSpan min, TimeSpan max, TimeSpan cur) Query()
    {
        NtQueryTimerResolution(out var min, out var max, out var cur);
        return (min: TimeSpanFrom100nsUnits(min), max: TimeSpanFrom100nsUnits(max), cur: TimeSpanFrom100nsUnits(cur));
    }


    /// <summary>Gets the minimum timer resolution</summary>
    public static TimeSpan MinResolution => Query().min;


    /// <summary>Gets the maximum timer resolution</summary>
    public static TimeSpan MaxResolution => Query().max;


    /// <summary>Gets/sets the current timer resolution</summary>
    public static TimeSpan CurrentResolution
    {
        get { return Query().cur; }

        set
        {
            // One TimeSpan tick is 100ns, which is also the unit NtSetTimerResolution expects
            NtSetTimerResolution(DesiredResolution: (int)value.Ticks, SetResolution: true, out _);
        }
    }
}

A little test app on my VM produced these results…

Minimum resolution:   15.6ms
Maximum resolution:   0.5ms
Current resolution:   15.6ms

Attempt to change to 2ms
Current resolution:   00:00:00.0020000
DateTime granularity: 00:00:00.0020970
Sleep 0:              00:00:00.0000009
Sleep 1:              00:00:00.0020053

Attempt to change to 5ms
Current resolution:   00:00:00.0050000
DateTime granularity: 00:00:00.0050328
Sleep 0:              00:00:00.0000012
Sleep 1:              00:00:00.0049719

Attempt to change to 0.5ms
Current resolution:   00:00:00.0005000
DateTime granularity: 00:00:00.0005471
Sleep 0:              00:00:00.0000008
Sleep 1:              00:00:00.0011774

Attempt to change to 15.6ms
Current resolution:   00:00:00.0156250
DateTime granularity: 00:00:00.0156280
Sleep 0:              00:00:00.0000011
Sleep 1:              00:00:00.0155707

C# from clause vs nested foreach loops

Short story: writing a unit test for an image processing function, I had the following key parameters for each test:

enum Algorithm { A, B, C };
enum ImageFormat { Gray, Color };
enum ImageSize { Small, Medium, Large };

I wrote a core test function that worked on just one combination of the 3 parameters. E.g.

void Test(
    Algorithm algorithm, 
    ImageFormat imageFormat, 
    ImageSize imageSize)
{
    Console.WriteLine($"Testing algorithm {algorithm} for " +
        {imageFormat} image with {imageSize} size");

    // the test...
}

Next, I wrote a utility to make iterating the values of an enumeration a little easier… I’m still not sure why this isn’t part of .NET yet:

class EnumHelper<T>
{
    static public T[] Values
    {
        get { return (T[])Enum.GetValues(typeof(T)); }
    }
}

Then, I wrote the nested loops that built each combination and sent them for testing:

void TestAllV1()
{
    foreach (var algorithm in EnumHelper<Algorithm>.Values)
    {
        foreach (var imageFormat in EnumHelper<ImageFormat>.Values)
        {
            foreach (var imageSize in EnumHelper<ImageSize>.Values)
            {
                Test(algorithm, imageFormat, imageSize);
            }
        }
    }
}

Now there’s nothing really wrong with the above, but it looked like something that should be possible to write more simply. So I came up with this:

void TestAllV2()
{
    var tests =
        from algorithm in EnumHelper<Algorithm>.Values
        from imageFormat in EnumHelper<ImageFormat>.Values
        from imageSize in EnumHelper<ImageSize>.Values
        select (algorithm, imageFormat, imageSize);


    foreach (var test in tests)
    {
        Test(test.algorithm, test.imageFormat, test.imageSize);
    }
}

The use of the from clause seems better, mainly due to the reduced level of nesting. The Visual Studio 2019 code analysis metrics are interesting:

Member        MI    CycC    ClsC
TestAllV1()   76    4       7
TestAllV2()   69    2       18

Where:

  • MI: Maintainability Index
  • CycC: Cyclomatic Complexity
  • ClsC: Class coupling

So the foreach approach is (allegedly!) more maintainable, while the from clause method has a lower cyclomatic complexity. The latter metric reinforces the idea that the from clause version is slightly simpler than the foreach technique.

It’s also quite easy to add specific filtering inside the tests generator. For example, to quickly stop testing the B algorithm:

var tests =
    from algorithm in EnumHelper<Algorithm>.Values
    where algorithm != Algorithm.B
    from imageFormat in EnumHelper<ImageFormat>.Values
    from imageSize in EnumHelper<ImageSize>.Values
    select (algorithm, imageFormat, imageSize);

Food for thought 🙂

Edit: I actually found another way to do this using the LINQ SelectMany method, but I’m not keen on it:

void TestAllV3()
{
    var tests =
        EnumHelper<Algorithm>.Values.SelectMany(
            algorithm => EnumHelper<ImageFormat>.Values.SelectMany(
                imageFormat => EnumHelper<ImageSize>.Values.Select(
                    imageSize => (algorithm, imageFormat, imageSize))));

    foreach(var test in tests)
    {
        Test(test.algorithm, test.imageFormat, test.imageSize);
    }
}

Apple Trackpad on Windows with 3-finger drag!

A month ago, when I needed to get a new laptop for work, I switched from a MacBook Pro to a Dell XPS 9750, saving hundreds of pounds for what on paper is an almost identical laptop. Except it isn’t. Dell’s XPS is astonishingly good, and it’s a privilege to own one, but I’ve become an Apple devotee over the years and can’t change that. My days are spent programming for Windows, so I need a fast PC development environment and therefore the XPS makes sense. With the MBP I need to use VMware Fusion or Parallels, and Apple are really pushing my (and other people’s) limits by charging so much money for memory and SSD upgrades. So I went for the XPS.

Of all the little things I miss, 3-finger dragging is way up there. To be fair, the XPS’s trackpad is one of the best out there for Windows laptops, and with Windows 10 there is a double-tap-to-drag gesture which is fantastic. But the trackpad is too small, and two taps (versus a single 3-finger drag) are one tap too many.

After a bit of Googling, I found Magic Utilities, a company that makes drivers and utilities for Apple wireless keyboards, the Magic Mouse, and the Magic Trackpad. When I used Windows as a virtual machine on the MBP I got used to a particular mapping of the Alt and Cmd keys; the keyboard utility app lets me restore those mappings. Also, I’ve just discovered that the Eject key now becomes Delete, although my fingers have 5 years of muscle memory which means that delete = Function + Backspace.

(Now, with the Apple keyboard, the function and control keys are where they should be IMO! There are loads of people asking Dell whether it’s possible to swap these keys on the Dell keyboards but it really can’t be done.)

Apple’s wireless Magic Trackpad performs terribly on Windows 10 by default: there are few (if any) gestures, and the whole feel of the cursor on the screen is terrible. The Magic Utilities trackpad app changes this completely:

Now I can scroll, drag, and single-finger tap, with not a mechanical click in sight.

I’ve now bought a one year licence after trialling this for a couple of weeks without any problems. Now I work with the XPS lid closed and just the wireless mouse and keyboard in front of an external 4K monitor. 

For the odd occasion where I might work 12 hours a day, the little things, both positive and negative, soon accumulate, so the ~£20 cost of this bundle is really a bargain.

Now if I could just find a similar utility to turn Windows 10 into Mac OS… 😀

Scanning receipts from iPhone to OneNote (App Store)

There are many ways to scan a document, such as a receipt, and import it into OneNote. I’m now using the App Store version of OneNote on Windows 10, and one of its (many) limitations is the inability to resize large images without having to cut them out, edit them, and paste them back in.

I’ve tried a bunch of scanning apps on the iPhone, and one of the main issues is finding something that can scan (including auto-detection of document borders), adjust brightness and contrast, and send to OneNote. The Adobe Scan app for the iPhone is awesome, but the PDF appears in OneNote as an icon and doesn’t appear to want to change into a readable document!

So for now the fastest way I’ve found is to use the Microsoft Office Lens iPhone app – this does a great job of scanning and although I can’t change the image resolution I can set it as a simple black and white image and send directly to OneNote.

First, run Office Lens.

Then let it find your document – it helps to have some natural contrast between the edges of the document and the background. 

Make sure that the Document option is selected:

Display the filters after scanning the document – just slide your finger up to view them:

I’m choosing the black and white option as it seems to help keep the file size down and also makes the receipt more readable:

After applying the filter click on the Done button at the bottom and go through the export options:

Enter a title for the note and choose which section on OneNote to save it to:

 

It would be a great improvement if the Office Lens app had a quality or file size option to reduce the amount of data stored in OneNote. There are already suggestions for this on the Office Lens feedback hub going back to 2015 – not sure if anyone’s listening though!