Apple Trackpad on Windows with 3-finger drag!

A month ago, when I needed a new laptop for work, I switched from a MacBook Pro to a Dell XPS 9750, saving hundreds of pounds for what on paper is an almost identical laptop. Except it isn’t. Dell’s XPS is astonishingly good, and it’s a privilege to own one, but I’ve become an Apple devotee over the years and can’t shake it. My days are spent programming for Windows, so I need a fast PC development environment, and therefore the XPS makes sense. With the MBP I needed to use VMWare Fusion or Parallels, and Apple are really pushing my (and other people’s) limits by charging so much for memory and SSD upgrades. So I went for the XPS. 

Of all the little things I miss, 3-finger dragging is way up there. To be fair, the XPS’s trackpad is one of the best out there for Windows laptops, and Windows 10 has a double-tap-to-drag gesture which is fantastic. But the trackpad is too small, and two taps (versus a single 3-finger drag) are one tap too many.

After a bit of Googling, I found Magic Utilities, a company that makes drivers and utilities for Apple wireless keyboards, the Magic Mouse, and the Magic Trackpad. When I used Windows as a virtual machine on the MBP I got used to a particular mapping of the Alt and Cmd keys; the keyboard utility app lets me restore these mappings. Also, I’ve just discovered that the Eject key now becomes Delete, although my fingers have 5 years of muscle memory which insists that Delete = Function + Backspace.

(Now, with the Apple keyboard, the function and control keys are where they should be IMO! There are loads of people asking Dell whether it’s possible to swap these keys on the Dell keyboards but it really can’t be done.)

Apple’s wireless Magic Trackpad performs terribly on Windows 10 by default. There are few (if any) gestures, and the whole feel of the cursor on the screen is terrible. Magic Utilities’ Trackpad app changes this completely:

Now I can scroll, drag, single-finger tap, and not a mechanical click in sight. 

I’ve now bought a one year licence after trialling this for a couple of weeks without any problems. Now I work with the XPS lid closed and just the wireless mouse and keyboard in front of an external 4K monitor. 

On the odd occasion when I might work 12 hours a day, the little things, both positive and negative, soon accumulate, so the ~£20 cost of this bundle is really a bargain. 

Now if I could just find a similar utility to turn Windows 10 into Mac OS… 😀

Scanning receipts from iPhone to OneNote (App Store)

There are many ways to scan a document, such as a receipt, and import it into OneNote. I’m now using the App Store version of OneNote on Windows 10 and one of the (many) limitations is the inability to resize large images without having to cut them out, edit, and paste back in.

I’ve tried a bunch of scanning apps on the iPhone and one of the main issues is finding something that can scan (including auto-detection of document borders), adjust brightness and contrast, and send to OneNote. The Adobe Scanner app for the iPhone is awesome but the PDF appears in OneNote as an icon and doesn’t appear to want to change into a readable document!

So for now the fastest way I’ve found is to use the Microsoft Office Lens iPhone app – this does a great job of scanning and although I can’t change the image resolution I can set it as a simple black and white image and send directly to OneNote.

First, run Office Lens.



Then let it find your document – it helps to have some natural contrast between the edges of the document and the background. 

Make sure that the Document option is selected:

Display the filters after scanning the document – just slide your finger up to view them:

I’m choosing the black and white option as it seems to help keep the file size down and also makes the receipt more readable:

After applying the filter click on the Done button at the bottom and go through the export options:

Enter a title for the note and choose which section on OneNote to save it to:


It would be a great improvement if the Office Lens app had a quality or file size option to reduce the amount of data stored in OneNote. There are already suggestions for this on the Office Lens feedback hub going back to 2015 – not sure if anyone’s listening though!

TimeSpan.FromMilliseconds rounding!

Today’s fairly brutal gotcha: TimeSpan.FromMilliseconds accepts a double but internally rounds the value to the nearest whole millisecond (as a long) before converting to ticks (multiplying by 10,000).

For example, using C# interactive in VS2017:

> TimeSpan.FromMilliseconds(1.5)
00:00:00.0020000

> TimeSpan.FromMilliseconds(1234.5678)
00:00:01.2350000

Using .FromTicks works as expected:

> TimeSpan.FromTicks(15000)
00:00:00.0015000

To be fair, this is the documented behaviour:

The value parameter is converted to ticks, and that number of ticks is used to initialize the new TimeSpan. Therefore, value will only be considered accurate to the nearest millisecond.

But really, it isn’t expected since the input is a double!

This all came to light because a camera system I’m involved with started overexposing –  the integration time was programmed as 2ms instead of the desired 1.5ms. Hmmph!

So a little alternative:

> TimeSpan TimeSpanFromMillisecondsEx(double ms) =>
    TimeSpan.FromTicks((long)(ms * 10000.0));

> TimeSpanFromMillisecondsEx(1.5)
00:00:00.0015000


Note: the FromMilliseconds method delegates to an internal Interval method, passing the milliseconds value and 1 as the scale:

private static TimeSpan Interval(double value, int scale)
{
    if (double.IsNaN(value))
        throw new ArgumentException(Environment.GetResourceString("Arg_CannotBeNaN"));
    double num = value * scale;
    double num2 = num + ((value >= 0.0) ? 0.5 : -0.5);
    if ((num2 > 922337203685477.0) || (num2 < -922337203685477.0))
        throw new OverflowException(Environment.GetResourceString("Overflow_TimeSpanTooLong"));
    return new TimeSpan(((long) num2) * 0x2710L);
}



VS2017 and NuGet for C++/CLI

At the time of writing it still isn’t possible to use the NuGet package manager for C++/CLI projects. My workaround is to:

  1. Add a new C# class library project to the solution.
  2. Add any NuGet packages to this new project.
  3. Configure the C# project so it always builds in Release configuration.
  4. Use the Build Dependencies dialog to ensure that the new C# project is built before the C++/CLI project.
  5. In the C++/CLI project, add references to the NuGet assemblies by browsing to the output folder of the C# project.


Create a new solution with a C++/CLI class library…

Add a C# class library (.NET Framework), delete Class1.cs, then go to the solution’s NuGet package manager:


Install the Newtonsoft.Json package for the C# project:

Change the C# build configuration so that the Release configuration builds for both Debug and Release:

Then delete the unused Debug configuration:


Make the C++/CLI project dependent on the C# project:

(Note: I use the Build Dependencies dialog for this rather than adding a project reference, to avoid copying the unused C# assembly to the C++/CLI project’s output folders.)

Build the solution.

Add a reference to the Newtonsoft library by using the Browse option in the Add References dialog and locating the C# project’s bin/Release folder:


Build the solution again. The Newtonsoft library will now be copied to the C++/CLI build folder:


First test: add some code to the C++/CLI class to demonstrate basic JSON serialisation:

#pragma once

using namespace System;

namespace CppCliDemo {

	using namespace Newtonsoft::Json;

	public ref class Class1
	{
	private:
		String^ test = "I am the walrus";

	public:
		property String^ Test
		{
			String^ get() { return this->test; }
			void set(String^ value) { this->test = value; }
		}

		String^ SerialiseToJson()
		{
			auto json = JsonConvert::SerializeObject(this, Formatting::Indented);
			return json;
		}
	};
}

Then add a simple C# console app, reference just the C++/CLI project, and test the class:

static void Main(string[] args)
{
    var test = new CppCliDemo.Class1();
    var json = test.SerialiseToJson();
    Console.WriteLine(json);
}


The output – nicely formatted JSON 🙂


Second test, make sure a clean rebuild works as expected:

  1. Close the solution
  2. Manually delete all binaries and downloaded packages
  3. Re-open solution and build
  4. Verify that the build order is:
    1. CSharpNuGetHelper
    2. CppCliDemo
    3. CSharpConsoleTest (my console test demo app)
  5. Run the console app and verify the serialisation works as before



Associate VS2017 with JSON files

Double-clicking a JSON file to try and open it in Visual Studio 2017 Professional doesn’t work; VS2015 worked fine. This link explains how to fix this for the Community edition. The same principle applies for VS2017.

Using regedt32 first find the Visual Studio magic number:

Key: Computer\HKEY_CLASSES_ROOT\.json\OpenWithProgids
Value: VisualStudio.json.a8eb385c



Then find the associated shell key and create a new sub-key ‘Command’ with the path to devenv.exe as the default value:

Key: Computer\HKEY_CLASSES_ROOT\VisualStudio.json.a8eb385c\shell\Open\Command
Value: "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE\devenv.exe"
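
The same edit can be captured as a .reg file for reuse. This is just a sketch – the a8eb385c hash and the install path are from my machine and will likely differ on yours:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\VisualStudio.json.a8eb385c\shell\Open\Command]
@="\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional\\Common7\\IDE\\devenv.exe\""
```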



Right-clicking a JSON file and selecting Open with now looks like this:



Raspberry Pi Desktop virtual machine

This weekend I discovered that there is a Linux distribution, based on Debian Jessie, that now has the Raspberry Pi Desktop; download it from here.

So I installed it as a virtual machine on my Windows 10 PC using VMWare Workstation Pro 14. The only difficulty was getting VMWare Tools working to allow automatic screen resizing and file sharing. I found some good info on the VMWare communities site (search for vmware tools debian jessie). From this thread there is a great perl script to help install VMWare Tools. The only modification I made was to delete --default from the line that runs the VMWare installer, without which the script will suggest an alternative option and abandon the installation.

I made a YouTube video of my virtual machine installation and VMWare tools configuration:


The script for VMWare tools:

sudo apt-get update 
sudo apt-get upgrade 
echo "Do go and mount your cdrom from the VMware menu" 
echo "press any key to continue" 
read -n 1 -s 
mount /dev/cdrom 
cd /media/cdrom0 
cp VMwareTools-*.tar.gz /tmp 
cd /tmp 
tar xvzf VMwareTools-*.tar.gz 
cd vmware-tools-distrib/ 
sudo apt-get install --no-install-recommends libglib2.0-0 
sudo apt-get install --no-install-recommends build-essential 
sudo apt-get install --no-install-recommends gcc-4.3 linux-headers-`uname -r` 
sudo ./vmware-install.pl 
sudo /etc/init.d/networking stop 
sudo rmmod pcnet32 
sudo rmmod vmxnet 
sudo modprobe vmxnet 
sudo /etc/init.d/networking start




VMWare Windows 10 expand/partition problem


I had a Windows 10 VM, managed using VMWare Workstation Pro 12. The VM was originally created with the default 60GB hard disk.

I needed to expand the disk, so I shut down the VM, removed all snapshots, expanded the virtual HD to 120GB, and rebooted the VM. The plan was to use Windows 10’s disk management tool to expand the original partition and merge in the new partition.

But the recovery partition was sandwiched between the original and new partitions, and couldn’t be deleted using the Disk Management tool:


I found the basics of how to fix this on the VMWare knowledge base. I’m adding my procedure here because it includes some useful screenshots.


I ran diskpart to work with the partitions. (I ran it as admin – diskpart needs elevated rights.)


From the DISKPART shell, I then used the following to select and then remove the unwanted partition:

DISKPART> list volume

  Volume ###  Ltr  Label  Fs    Type       Size    Status    Info
  ----------  ---  -----  ----  ---------  ------  --------  ------
* Volume 0    D                 DVD-ROM    0 B     No Media
  Volume 1    C           NTFS  Partition  59 GB   Healthy   System
  Volume 2                NTFS  Partition  450 MB  Healthy   Hidden

DISKPART> select volume 2

Volume 2 is the selected volume.

DISKPART> delete partition override

DiskPart successfully deleted the selected partition.

The Disk Management window showed the new partition layout:


Next, I right-clicked on the C: partition and chose ‘Extend Volume’:



At last, a single 120GB partition.




C# TimeSpan TypeConverter and UITypeEditor

Code for this post is on GitHub.

I have an application that presents various TimeSpan properties to a user. The default string conversion isn’t great, in fact for anything other than hh:mm:ss it isn’t intuitive.

A TimeSpan of 1 day, 2 hours, 3 minutes, 4 seconds, and 5 milliseconds is shown in the example below:


After a little noodling I found some articles that helped me put together something better (at least for me!).

The first feature is the presentation of a TimeSpan instance as a string:


The second feature is the ability to convert back from a string. For example, entering a value of 1h, 5s:


… becomes…


And finally, the property can present an interactive editor via a dropdown:


Here’s an example of how the new classes are used as attributes on a TimeSpan:

[Editor(typeof(TimeSpanUIEditor), typeof(UITypeEditor))]
[DefaultValue(typeof(TimeSpan), "1.02:03:04.005")]
[DisplayName("Custom 1")]
public TimeSpan A { get; set; } = new TimeSpan(1, 2, 3, 4, 5);

VS2015 Update 2 VC.DB git mayhem

So today, after updating Visual Studio 2015 to Update 2, I was committing some changes to a project. Nestled amongst my own changes, deleted files, and new files, was a new VS2015 file, AOX3.VC.db (AOX3 is the name of my solution). This file is the new database engine and replaces (I believe) the SDF file. In Update 1 this was an experimental feature, but as of update 2 it’s official:

The new SQLite-based database engine is now being used by default.

I inadvertently added the file to my local git repository and then committed the changes. (I did the same thing on a couple of other projects, and made several more modifications and commits afterward.)

The problem is, this file shouldn’t be in git, especially because it is around 150MB !

I only realised I had a problem when I tried to sync to my online (remote) repository, hosted by VSTS – luckily the attempt failed. The Synchronization panel had the following error message:

Failed to push to the remote repository. See the Output window for more details.

The output window had the following details:

Error encountered while pushing to the remote repository: Error while copying content to a stream.

Inner Exception:

Unable to read data from the transport connection: The connection was closed.

After a bit of googling I found I had to resort to the nuclear option to fix the problem.. deleting the file from the local git commit history.

Here’s what I did:

1. In VS2015 find the first commit that included the file. To do this I went to the Sync panel on the Team Explorer page and found the outgoing commit, then double-clicked it to see the commit details:




2. Note the parent commit ID of the first commit that included the new file (shown above, 645faa8c), then close VS2015.

3. Run a git bash shell in the solution folder. Then use the filter-branch command to delete the file from the last two commits. This requires specifying a range of commits from the parent of the first bad commit (noted above) all the way up to HEAD (the latest commit). Here’s the git session:

jon@DESKTOP-0J37S2C MINGW64 /c/dev/Projects/Amber/AOX3 (dev_jon)
$ git filter-branch --tree-filter 'rm -f AOX3.VC.db' 645faa8c..HEAD
Rewrite 2fa87507296744092c2ac8b0c300b8c5973276ef (1/2) (0 seconds passed, remain
Rewrite 2917e9beea5ec0f546538345e33bd01586d359fb (2/2) (15 seconds passed, remaining 0 predicted)
Ref 'refs/heads/dev_jon' was rewritten

4. Open VS2015 and check the commits.. now the added/modified database file has disappeared from the list of changes:


5. I then edited my .gitignore file and added *.VC.db, then committed this mod and finally checked everything synchronized.. all worked as expected !


I guess the moral of the story is: don’t be flippant when adding new files, especially those you don’t recognize as your own. As the filter-branch documentation states:

This occurs fairly commonly. Someone accidentally commits a huge binary file with a thoughtless git add ., and you want to remove it everywhere.

This thoughtless programmer signing off, time to sleep 😉



OpenCV3, VS2015


So I finally got myself a clean Windows 10 VM for the next couple of years development, installed Visual Studio 2015, downloaded OpenCV, migrated my VS2013 OpenCV C++ test app, and.. and.. of course it didn’t work 😦

This was only an issue because I was using the static OpenCV libraries rather than the pre-built binaries. It looks like VS2015 is quite happy using the VC12 OpenCV binaries.

The current release of OpenCV 3.0.0 (04/06/2015) comes with pre-built static and import libraries for VS2013 and VS2010, but not for VS2015. I found an excellent article on the Colorado School of Mines (!) website on how to build VS2015 libraries. But I needed some other tweaks beyond a basic build of OpenCV. So the goal of this note is to show how I built OpenCV for use with Visual Studio 2015 with the following configurations:

  1. 32- and 64-bit support.
  2. Static and Shared OpenCV libraries.
  3. Debuggable static libraries by having the correct combination of source code, libraries and PDBs.

The basic procedure I’m using (after several iterations) is:

  • Clone version 3.0.0 of OpenCV to a local development folder. I’m going to keep these source files handy because the debugger can use them in Debug builds
  • Install CMake, a build configuration tool.
  • Generate 4 VS2015 solutions
    • x86, static
    • x86, shared
    • x64, static
    • x64, shared

Clone OpenCV

This assumes you have VS2015 and the latest Git for Windows.

The trick, at least for me, is to clone into a folder that will be used to keep the OpenCV source code. This folder won’t get deleted during the steps below. I’m also only interested in the official latest release, which at the time of writing is version 3.0.0.

I have a standard development folder structure which is currently based on c:\dev, and I have a standard sub-folder structure for all third party libraries, c:\dev\ThirdParty. So I’m going to put OpenCV into c:\dev\ThirdParty\OpenCV3.0.0.

Run a git bash shell (from any folder) and clone version 3.0.0. Note the use of forward slashes in both paths !

Here’s my bash session:
jon@DESKTOP-0J37S2C MINGW64 /c/dev/ThirdParty/OpenCV3.0.0 ((3.0.0))
$ git clone --branch 3.0.0 https://github.com/opencv/opencv.git C:/dev/ThirdParty/OpenCV3.0.0/sources
Cloning into 'C:/dev/ThirdParty/OpenCV3.0.0/sources'...
remote: Counting objects: 183285, done.
remote: Compressing objects: 100% (52/52), done.
remote: Total 183285 (delta 22), reused 6 (delta 6), pack-reused 183224
Receiving objects: 100% (183285/183285), 419.28 MiB | 3.54 MiB/s, done.
Resolving deltas: 100% (124890/124890), done.
Checking connectivity... done.
Note: checking out 'c12243cf4fccf5df7b0270a32883986b373dca7b'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

Checking out files: 100% (4656/4656), done.
Now I have the following folder structure:

Get CMake

There isn’t a VS2015 solution bundled with the OpenCV download so you must create one. This is made easy by using CMake, a cross-platform build utility. Download and install from here – I used the Windows (Win32 Installer) ‘cmake-3.4.1-win32-x86.exe’, and kept the default installation options.

Create the VS2015 Solutions

The basic procedure with CMake is:

  • Set the source and build folders
  • Click Configure
  • Choose the generator (compilers)
  • Tweak the options
  • Click Generate

I’m going to place each solution in a unique build folder and will start with:

  • 32-bit (x86)
  • Shared OpenCV_World300.dll
  • Static C runtime (CRT)
  • No test libraries (because they don’t build cleanly for me, and I want a clean build for the moment)

Run CMake and set the source code folder. Then choose a folder for the build. Make sure the build folder doesn’t already exist. E.g.


Then click Configure, accept any request to create the build folder, and then choose the generator for the project. For x86:


The first pass at the configuration takes a minute or so and may download some IPP (ippicv) files. It also ends up looking quite bad but the red items are just new values (which initially means everything):


Use the Search box to find and verify/update the following settings:

  • BUILD_opencv_world: True

For example, to find and update the OpenCV_World option:



Click Generate. Then check you have a new VS2015 solution (but don’t open or build yet):

  • C:\dev\ThirdParty\OpenCV3.0.0\build_x86_shared\OpenCV.sln

For the remaining 3 VS2015 solutions first change the build path, click Configure, apply the settings, and click Generate.

  • x86, static
    • Build path: C:/dev/ThirdParty/OpenCV3.0.0/build_x86_static
    • Generator: Visual Studio 14 2015
    • BUILD_opencv_world: False


  • x64, shared
    • Build path: C:/dev/ThirdParty/OpenCV3.0.0/build_x64_shared
    • Generator: Visual Studio 14 2015 Win64
    • BUILD_opencv_world: True


  • x64, static
    • Build path: C:/dev/ThirdParty/OpenCV3.0.0/build_x64_static
    • Generator: Visual Studio 14 2015 Win64
    • BUILD_opencv_world: False


Build x86 shared

Close CMake and open VS2015, then open the x86 shared solution:

  • C:\dev\ThirdParty\OpenCV3.0.0\build_x86_shared\OpenCV.sln

Give it a minute or two for the parsing stage to complete, select the Debug|Win32 configuration, and build the solution.


I got the following output:

Build: 14 succeeded, 0 failed, 0 up-to-date, 5 skipped

Next build the CMakeTargets/INSTALL project – this will put the output files in an install folder which we’ll use later on:


You can check you have the opencv_world300 debug DLL:


Now switch to the Release|Win32 configuration, build the solution, and build the CMakeTargets/INSTALL project. This will add the opencv_world release DLL to the install folder.

Now open the x64 shared solution and repeat the above to get the debug and release opencv_world300 DLLs. Note the configurations are:

  • Debug|x64
  • Release|x64


Open the x86 static solution. Before building we must fix a glitch with the PDB file names: for some reason the debug PDBs are not configured to have a d appended to the filename, which means the debug and release PDBs will all use the same filenames. For example:

  • IlmImf.lib -> IlmImf.pdb
  • IlmImfd.lib -> IlmImf.pdb

There is probably a quick way to fix this using the CMake tools, but I don’t know how to ! Instead, do the following:

  1. Multi-select all the projects in the 3rdparty group.
  2. Open the Property Pages (right-click and select Properties).
  3. Set the Configuration to ‘All Configurations’
  4. Navigate in the tree-view to C/C++ / General
  5. Change the ‘Debug Information Format’ to ‘C7 compatible’
  6. Hit OK to apply the changes and close the Property Pages.

Repeat the above procedure for all projects in the following groups:

  • applications
  • modules
  • object_libraries

Why do this ?

  1. Instead of producing .lib and .pdb files we now have the PDB information merged into the .lib.
  2. It solves the problem of the debug build PDBs not having different names from the release equivalents.
  3. The CMakeTargets/INSTALL project doesn’t copy the PDBs, meaning a further manual step would be required to find the PDBs and copy them to the install folder.

Is there a better way – I’m sure there is, but this is quite simple and quick and if it works then why not ? (One good reason to not use this approach is if you’re going to work on the OpenCV source code; using the C7 option takes away certain compiler and linker options that could make for much faster builds). There’s plenty of discussion on the web about this, for example on StackOverflow.

Repeat the previous build procedure to get the Debug and Release builds and INSTALLs done. Then repeat again for the x64 (remembering to modify the Debug Information Format settings first).

Consolidating the build

I want to replicate the folder structure of the official release of OpenCV as much as possible. So first, create a new build folder:

  • C:\dev\ThirdParty\OpenCV3.0.0\build

Then copy the contents of each of the install folders into the above build folder. Skip over duplicate files.

  1. build_x64_shared
  2. build_x64_static
  3. build_x86_shared
  4. build_x86_static

My final folder structure is around 1.2GB and looks like this



All four of the build_XXX folders can now be deleted – for me they take up around 10GB. Alternatively you can reload each solution, select Batch Build, Select All, then clean. This drops the folders down to around 2GB. For me I’m going to ZIP and archive them, in case I made a mistake !

Testing the build

One of my requirements is to make sure colleagues can work with these libraries, even if they use a different folder structure to me for their own projects. The only prerequisite is that there’s a ThirdParty\OpenCV3.0.0 folder; but ThirdParty can be anywhere.

My first trick is to make a User Macro in VS2015 to set the ThirdParty folder location. This is another example of something that could probably be done better, but I don’t want to change the system PATH and I need something simple. My reference for User Macros is from this MSDN page.

  1. Open or create a C++ project.
  2. View the Property Manager:
  3. Select the Microsoft.Cpp.Win32.user node of any project, right-click and select Properties. Then select the User Macros setting and add a Macro called ThirdParty:
  4. Apply the changes and close the dialog, then close the Property Manager.

The macro can now be used in projects which may be distributed among other developers; each developer just needs to have their ThirdParty folder (with OpenCV3.0.0 inside) and this macro.
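
As an aside, the macro ends up in the Microsoft.Cpp.Win32.user property sheet on disk. A minimal sheet carrying such a macro looks roughly like this sketch – the value shown is my folder and purely illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup Label="UserMacros">
    <ThirdParty>C:\dev\ThirdParty</ThirdParty>
  </PropertyGroup>
  <ItemGroup>
    <BuildMacro Include="ThirdParty">
      <Value>$(ThirdParty)</Value>
    </BuildMacro>
  </ItemGroup>
</Project>
```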

As an example, I have a 32-bit project using the static OpenCV library. The key settings are:

Additional Include Directories:

Additional Library Directories:

Additional Dependencies:

The list of dependencies for my Release build is:


And for my debug build it is:


To make sure it’s all looking good I build the Debug configuration and check there are no spurious errors or warnings. (For example, before I switched to the C7 debug-info setting I had all sorts of warnings about missing PDBs.)

Then try setting a breakpoint on some OpenCV code and stepping into it: if all goes well you’ll be able to step into and around the OpenCV source code.

Note: if you move your OpenCV distribution to another folder then the debugger will prompt you for the location of the OpenCV source files. This only happens once (as far as I can tell).

To use the DLL version of OpenCV change the following:

  • Linker/General/Additional Library Directories:
  • Linker/Input/Additional Dependencies:

Note: if you run a shared DLL version without making the DLL available then you’ll get something like this:

‘The program can’t start because opencv_world300.dll is missing from your computer.’


To avoid changing the system PATH I added a post-build event to copy the DLL to my project’s output:


A good sample program to start with is Load and Display an Image, on the OpenCV website.

That’s it – I’m done ! After at least 10 variations of these builds, and two or three attempts to write up how I did it, I’m leaving it here.  At least until I figure a better way to do it, or an update is issued from OpenCV, or I figure out how to try and contribute directly to OpenCV, etc.  🙂