After much struggle, learning and googling, I’ve finally made my own small library that provides a simple C# code editor and a bunch of simple C# scripting utilities.
A few years ago I wrote a simple Scintilla.NET-based code editor and, with the Microsoft C# scripting library, managed to let my apps provide code editing and run-time C# script execution. The code was scrappy and, without IntelliSense, difficult to use for non-trivial scripts.
Then I found the RoslynPad project, which uses AvalonEdit as the underlying text editor to provide a WPF C# scripting editor. As well as a stand-alone application, the editor is also available via NuGet. Since I work almost entirely with WinForms rather than WPF, I wanted an easy-to-use drag-and-drop widget to provide code editing. So I wrote CDS.CSharpScripting.
As well as the code editor there are classes for script compilation and execution, including EasyScript, a class that allows for one-line compilation and execution. The compiler utilities provide access to the compilation results, such as errors and warnings.
Example 1:
EasyScript<object>.Go("Console.WriteLine(\"Hello world, from the script!\");");
Output:
Hello world, from the script!
There’s still lots to do on the project, including more demos and getting code actions to work – the actual actions do already work, but the menu text is broken.
If you need something with more power than my simple code editor wrapper then the RoslynPad project is available on GitHub. There are also countless demos showing how to use Microsoft’s C# scripting classes directly. But if you just need a quick code editor and the ability to compile and run code at runtime, capture script return values and share data between the host app and the script, then this is not a bad place to start.
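For a flavour of what the underlying Microsoft classes look like when used directly, here’s a minimal sketch (assuming the Microsoft.CodeAnalysis.CSharp.Scripting package; the Globals class and its ExposureMs property are just made-up names for the example) that captures a script’s return value and shares data between the host app and the script:
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;

// Data shared between the host app and the script (must be a public type).
public class Globals
{
    public int ExposureMs { get; set; }
}

public static class ScriptDemo
{
    public static async Task RunAsync()
    {
        // The script reads the host's ExposureMs value, and the last
        // expression becomes the captured return value.
        int doubled = await CSharpScript.EvaluateAsync<int>(
            "ExposureMs * 2",
            globals: new Globals { ExposureMs = 21 });

        Console.WriteLine(doubled);   // 42
    }
}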
Some of my projects always rebuild, even when nothing has changed, whereas others correctly report that they’re already up-to-date.
I’ve finally got something that’s actually helping me isolate the problem, and it’s related to NuGet packages that pull in lots of other dependencies.
The first step is to change the MSBuild output verbosity from Minimal to Diagnostic (Tools > Options > Projects and Solutions > Build and Run):
Run the build once just in case anything really has changed.
Build the project and look at the top of the diagnostics in the output window. As an example, I have this
1>Project 'AmberOptix.AOX3.Scripting' is not up to date. CopyLocal reference 'C:\dev\Projects\Amber\AOX3\AmberOptix.AOX3.Scripting\bin\Debug\System.Diagnostics.StackTrace.dll' is missing from output location.
Next I remove this assembly (System.Diagnostics.StackTrace.dll) from my packages.config file and the references. Then I build again and repeat the process until it eventually says that everything is up-to-date.
For some of the ‘missing’ assemblies I can guess that several others may also not be required. For example I deleted 3 System.Net.XXX packages and references when System.Net.Http was reported as missing.
As a guideline I had to remove over 20 packages and references from my scripting library to get this working.
An alternative to this manual approach of deleting one at a time is to delete all packages and references, then go through a cycle of adding back NuGet packages and normal assembly references as required.
I’m still sure there must be a better and safer way to do this! I think JetBrains’ ReSharper has tools for this, but I haven’t had a chance to try them yet.
‘Type here to search’ suddenly stopped working this morning on all of my virtual machines. I presumed this was a Parallels problem; however, it’s related to Bing, and lots of people are having this problem…
Windows Search has stopped working. The culprit is Bing search integration. Disable Bing search with this guide: https://t.co/3tfGH22nWI
It took a few attempts to get a compatible Python and OpenCV library running under Visual Studio Code on macOS Catalina using a virtual environment. I made a video to show how I got this going – this post just adds some more details.
There is also an excellent tutorial from Microsoft:
Visual Studio Code running on a virtual machine may have problems rendering the interface. This seems to be related to the underlying Electron framework and GPU acceleration. I made a quick video to show how I got around this:
Fix rendering problems for Visual Studio Code running on a virtual machine
Install Python 3.7.5
A virgin Mac comes with Python 2.7 installed – using this is not recommended, and V3.7.5 works with OpenCV 4 on a Mac. V3.8 did not work at the time of writing (although since I started writing this post it looks like it now does). Download the installer from the main Python website by selecting Downloads, then Mac OS X, and then selecting the 64-bit installer:
Run the installer – I used all default settings.
Install Visual Studio Code
Download Visual Studio Code and immediately move the downloaded file to the Applications folder (this is the actual application, not an installer). Try to run it once – macOS will refuse due to security:
Close the message, open System Preferences, and select the Security and Privacy settings. Then select “Open Anyway” to allow VSC.
Visual Studio Code should now start:
Configure Python
Open a folder by selecting Open folder and then add a new file. Save the file using the .py extension:
Visual Studio Code immediately offers to install the Python extension, select Install:
On a virgin Mac there will now be a prompt to install command line developer tools, so click Install if prompted and allow the installation to complete before returning to Visual Studio Code.
The status bar will show the selected interpreter if everything has gone well:
Install the linter (pylint): this helps analyse the code for bugs and style issues. It might not work first time, but we can fix that shortly…
If the terminal window suggests upgrading pip, the Python package manager, then go for it by running the following in the terminal window:
python3 -m pip install --upgrade pip
Make a virtual environment
A virtual environment is a self-contained directory tree that contains a Python installation for a particular version of Python.
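Creating and activating one from the integrated terminal looks roughly like this (the folder name env here is just an example):
python3 -m venv env
source env/bin/activate
Visual Studio Code usually notices the new environment and offers to select it as the interpreter.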
A pip search for OpenCV turns up several candidate packages:
opencv-utils (0.0.2) – OpenCV Utilities
ctypes-opencv (0.8.0) – ctypes-opencv – A Python wrapper for OpenCV using ctypes
opencv-wrapper (0.2.3) – A Python wrapper for OpenCV.
opencv-cython (0.4) – An alternative OpenCV wrapper
dajngo-opencv (0.3) – Django Opencv integratio
opencv-python (4.1.2.30) – Wrapper package for OpenCV python bindings
For this test I’m using opencv-python. The details on version 4.1.2.30 can be found on the Python Package Index site. Interestingly this version was only released a few hours ago and says it supports Python 3.8 😬 I guess I’ll try this on a virtual machine first to check it’s all ok!
Install OpenCV using pip:
python3 -m pip install opencv-python
Write some code and fix the linter
First test: import the OpenCV module and print the library version.
import cv2
print('Using OpenCV version {0}'.format(cv2.__version__))
After running the script, this output is shown in the terminal:
But – there’s a problem. In the editor the linter is suggesting that cv2 is not a known module:
This has been seen before on the pylint GitHub issues page. For me, the solution is to edit the .vscode settings. Using ⇧⌘E (shift+command+E) to view the Explorer pane, expand the .vscode folder and click settings.json:
Add a comma to the end of the line of the existing setting, then add the following new setting:
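As an example of the kind of setting involved – the exact arguments may differ from the ones I used – a commonly suggested pylint workaround for OpenCV’s compiled cv2 module is:
"python.linting.pylintArgs": ["--extension-pkg-whitelist=cv2"]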
Tested with Parallels Desktop 15 Pro Edition on macOS Catalina.
Problem 1: using Visual Studio, Notepad++ or any similar multi-document application in Windows 10 on a Mac under Parallels, ⌘W (Command+W) maps to Alt+F4, which closes the whole application. This means that to close just an editor page you have to revert to the original Windows shortcut of Ctrl+F4, which is a minor pain on a Mac with the Touch Bar instead of function keys.
Solution 1: change the Parallels preferences to remap ⌘W to Ctrl+F4. ⌘Q will still close an application but ⌘W will now close an internal editor window; this is the same behaviour used in Safari to close the whole application (⌘Q) or close just one page (⌘W).
Problem 2: Control+Tab and Shift+Control+Tab don’t switch between editor windows.
Solution 2: a recent update to Parallels resulted in these shortcuts being used for Parallels’ own tab switching, so they don’t get passed on to the VM. Simply unchecking these shortcuts in the Parallels preferences fixes the problem.
Short story: writing a unit test for an image processing function, I had the following key parameters for each test:
enum Algorithm { A, B, C };
enum ImageFormat { Gray, Color };
enum ImageSize { Small, Medium, Large };
I wrote a core test function that worked on just one combination of the 3 parameters. E.g.
void Test(
Algorithm algorithm,
ImageFormat imageFormat,
ImageSize imageSize)
{
Console.WriteLine($"Testing algorithm {algorithm} for " +
{imageFormat} image with {imageSize} size");
// the test...
}
Next, I wrote a utility to make iterating the values of an enumeration a little easier… I’m still not sure why this isn’t part of .NET yet:
class EnumHelper<T>
{
static public T[] Values
{
get { return (T[])Enum.GetValues(typeof(T)); }
}
}
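As an aside, if the compiler in use supports C# 7.3 or later (an assumption about the project setup), the helper can be constrained so that it only compiles for enum type arguments – a small variant of the same idea:
// Same helper, but misuse with a non-enum type is now a compile-time error (C# 7.3+).
class EnumHelper<T> where T : struct, Enum
{
    public static T[] Values => (T[])Enum.GetValues(typeof(T));
}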
Then, I wrote the nested loops that built each combination and sent them for testing:
void TestAllV1()
{
foreach (var algorithm in EnumHelper<Algorithm>.Values)
{
foreach (var imageFormat in EnumHelper<ImageFormat>.Values)
{
foreach (var imageSize in EnumHelper<ImageSize>.Values)
{
Test(algorithm, imageFormat, imageSize);
}
}
}
}
Now there’s nothing really wrong with the above, but it looked like something that could be written more simply. So I came up with this:
void TestAllV2()
{
var tests =
from algorithm in EnumHelper<Algorithm>.Values
from imageFormat in EnumHelper<ImageFormat>.Values
from imageSize in EnumHelper<ImageSize>.Values
select (algorithm, imageFormat, imageSize);
foreach (var test in tests)
{
Test(test.algorithm, test.imageFormat, test.imageSize);
}
}
The use of the from clause seems better, mainly due to the reduced level of nesting. The Visual Studio 2019 code analysis metrics are interesting:
Member        MI   CycC   ClsC
TestAllV1()   76   4      7
TestAllV2()   69   2      18
Where:
MI: Maintainability Index
CycC: Cyclomatic Complexity
ClsC: Class coupling
So the foreach approach is (allegedly!) more maintainable, while the from clause method has a lower cyclomatic complexity. This latter metric reinforces the idea that this is slightly simpler than the foreach technique.
It’s also quite easy to add specific filtering inside the tests generator. For example, to quickly stop testing the B algorithm:
var tests =
from algorithm in EnumHelper<Algorithm>.Values
where algorithm != Algorithm.B
from imageFormat in EnumHelper<ImageFormat>.Values
from imageSize in EnumHelper<ImageSize>.Values
select (algorithm, imageFormat, imageSize);
Food for thought 🙂
Edit: I actually found another way to do this using the LINQ SelectMany method, but I’m not keen on it:
void TestAllV3()
{
var tests =
EnumHelper<Algorithm>.Values.SelectMany(
algorithm => EnumHelper<ImageFormat>.Values.SelectMany(
imageFormat => EnumHelper<ImageSize>.Values.Select(
imageSize => (algorithm, imageFormat, imageSize))));
foreach(var test in tests)
{
Test(test.algorithm, test.imageFormat, test.imageSize);
}
}
Today’s fairly brutal gotcha: TimeSpan.FromMilliseconds accepts a double but internally rounds the value to the nearest whole millisecond before converting to ticks (multiplying by 10,000).
The value parameter is converted to ticks, and that number of ticks is used to initialize the new TimeSpan. Therefore, value will only be considered accurate to the nearest millisecond.
But really, this isn’t expected behaviour given that the input is a double!
This all came to light because a camera system I’m involved with started overexposing – the integration time was programmed as 2ms instead of the desired 1.5ms. Hmmph!
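A minimal sketch of the gotcha, plus the workaround of building the TimeSpan from ticks instead (behaviour as observed on .NET Framework):
// TimeSpan.FromMilliseconds rounds to the nearest whole millisecond.
var rounded = TimeSpan.FromMilliseconds(1.5);
Console.WriteLine(rounded.TotalMilliseconds);   // 2

// Building the TimeSpan from ticks keeps the sub-millisecond precision.
var precise = TimeSpan.FromTicks((long)(1.5 * TimeSpan.TicksPerMillisecond));
Console.WriteLine(precise.TotalMilliseconds);   // 1.5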
At the time of writing it still isn’t possible to use the NuGet package manager for C++/CLI projects. My workaround is to:
Add a new C# class library project to the solution.
Add any NuGet packages to this new project.
Configure the C# project so it always builds in Release configuration.
Use the Build Dependencies dialog to ensure that the new C# project is built before the C++/CLI project.
Add to the C++/CLI project a reference to the NuGet packages by using the output folder of the C# project.
Example
Create a new solution with a C++/CLI class library…
Add a C# class library (.Net Framework), delete Class1.cs, then go to the solution’s NuGet package manager:
Install the Newtonsoft.Json package for the C# project:
Change the C# project’s build configuration so that its Release configuration is built for both the solution’s Debug and Release configurations:
Then delete the unused Debug configuration:
Make the C++/CLI project dependent on the C# project:
(Note: I use the build dependency rather than adding a project reference to avoid copying the unused C# project’s assembly into the C++/CLI project’s output folders.)
Build the solution.
Add a reference to the Newtonsoft library by using the Browse option in the Add References dialog and locating the C# project’s bin/Release folder:
Build the solution again. The Newtonsoft library will now be copied to the C++/CLI build folder:
First test: add some code to the C++/CLI class to demonstrate basic JSON serialisation:
#pragma once
using namespace System;
namespace CppCliDemo {
using namespace Newtonsoft::Json;
public ref class Class1
{
private:
String^ test = "I am the walrus";
public:
property String^ Test
{
String^ get() { return this->test; }
void set(String^ value) { this->test = value; }
}
String^ SerialiseToJson()
{
auto json = JsonConvert::SerializeObject(this, Formatting::Indented);
return json;
}
};
}
Then add a simple C# console app, reference just the C++/CLI project, and test the class:
static void Main(string[] args)
{
var test = new CppCliDemo.Class1();
var json = test.SerialiseToJson();
Console.Write(json);
}
The output – nicely formatted JSON 🙂
Second test, make sure a clean rebuild works as expected:
Close the solution
Manually delete all binaries and downloaded packages
Re-open solution and build
Verify that the build order is:
CSharpNuGetHelper
CppCliDemo
CSharpConsoleTest (my console test demo app)
Run the console app and verify the serialisation works as before
This weekend I discovered that there is a Linux distribution, Debian Jessie with the Raspberry Pi Desktop, available to download from the Raspberry Pi website.
So I installed it as a virtual machine on my Windows 10 PC using VMWare Workstation Pro 14. The only difficulty was getting VMWare Tools working to allow automatic screen resizing and file sharing. I found some good info on the VMWare Communities site (search for vmware tools debian jessie), including a thread with a great perl script to help install VMWare Tools. The only modification I made was to delete --default from the line that runs the VMWare installer; if it is left in, the script suggests an alternative option and abandons the installation.
I made a YouTube video of my virtual machine installation and VMWare tools configuration:
The script for VMWare tools:
#!/bin/bash
sudo apt-get update
sudo apt-get upgrade
echo "Do go and mount your cdrom from the VMware menu"
echo "press any key to continue"
read -n 1 -s
mount /dev/cdrom
cd /media/cdrom0
cp VMwareTools-*.tar.gz /tmp
cd /tmp
tar xvzf VMwareTools-*.tar.gz
cd vmware-tools-distrib/
sudo apt-get install --no-install-recommends libglib2.0-0
sudo apt-get install --no-install-recommends build-essential
sudo apt-get install --no-install-recommends gcc-4.3 linux-headers-`uname -r`
sudo ./vmware-install.pl
sudo /etc/init.d/networking stop
sudo rmmod pcnet32
sudo rmmod vmxnet
sudo modprobe vmxnet
sudo /etc/init.d/networking start
I had a Windows 10 VM, managed using VMWare Workstation Pro 12. The VM was originally created with the default 60GB hard disk.
I needed to expand the disk, so I shut down the VM, removed all snapshots, expanded the virtual hard disk to 120GB, and rebooted the VM. The plan was to use Windows 10’s Disk Management tool to extend the original partition into the new unallocated space.
But the recovery partition was sandwiched between the original partition and the new space, and couldn’t be deleted using the Disk Management tool:
I found the basics of how to fix this on the VMWare knowledge base. I’m adding my procedure here because it includes some useful screenshots.
Steps
I ran diskpart to work with the partitions. (I ran it as admin, but I don’t think that was required.)
From the DISKPART shell, I then used the following to select and then remove the unwanted partition:
DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
* Volume 0     D                       DVD-ROM         0 B  No Media
  Volume 1     C                NTFS   Partition     59 GB  Healthy    System
  Volume 2                      NTFS   Partition    450 MB  Healthy    Hidden

DISKPART> select volume 2

Volume 2 is the selected volume.

DISKPART> delete partition override

DiskPart successfully deleted the selected partition.
The Disk Management window showed the new partition layout:
Next, I right-clicked on the C: partition and chose ‘Extend Volume’:
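For reference, the same extension can also be done without leaving the DISKPART shell – a sketch, assuming C: is still volume 1 as in the listing above:
DISKPART> select volume 1
DISKPART> extend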