Wednesday, 20 December 2006

How to misuse the Office 2007 Ribbon

Dare Obasanjo has noticed a comment of mine on Jensen Harris’s post announcing Microsoft’s licensing of the concept of the Office 2007 ‘Ribbon’ UI. In that comment, I criticised (in a single sentence) Dare’s concept for a future version of RSS Bandit. I should say up-front that I’m a regular user of RSS Bandit; it’s my main RSS reader at home, in which I’m subscribed to over 100 feeds. I want this to remain usable, and my fear is that it won’t be.

Funnily enough, he doesn’t acknowledge that I made the first comment on that post, in which I went into more detail. I said:

It doesn't belong. There's no need to go to an Office-style menu system in RSS Bandit because you barely ever use the menus anyway. It's not like there are loads of features hidden in the depths of the menus and dialogs, and the gallery is particularly over-the-top. How often do you think people will change the style of the newspaper view? Virtually never, in my opinion - they'll pick one that works, and stick with it. These options don't need to be 'in your face' the whole time. RSS Bandit is not document authoring software, it's a browser.

If anything you could follow IE7's lead and drop the menu bar entirely. There aren't that many menu options, and most of them are replicated with some other widget, on one of the toolbars, or in the case of View/Feed Subscriptions and View/Search, the two tabs in the pane.

Most of the other options that aren't duplicated could end up on an extended Tools menu.

Dare links to Mike Torres who comments on the menu-less UI of various Microsoft applications, suggesting that this is something recent. At least two of these have been menu-less for a while, in one case for five years: Windows Media Player. The original version of WMP in Windows XP was without menus:

Windows Media Player for Windows XP (WMP 8)

The highly-unconventional window shape was toned down in version 9.0 and became virtually conventional in 10.0, although all four corners are rounded whereas the normal XP themes have rounded top corners and square bottom corners.

It appears that the menus first disappeared from MSN Messenger in version 7.0, which was released in April 2005:

MSN Messenger 7.0

Which Office application is RSS Bandit most like? Word? Excel? No. It’s most like Outlook. Which major Office 2007 application does not get a Ribbon (in its main UI)? Outlook.

I’ve been following Jensen Harris’s blog more-or-less since the beginning. In it, he explains the motivations behind creating the Ribbon, and the data that was used to feed the process of developing it. The Ribbon is mainly about giving better access to document creation and formatting, by showing the user a gallery of choices and letting them refine their selection. Which part of Outlook gets a Ribbon? The message editor (OK, this is actually part of Word).

RSS Bandit is about viewing other people’s content, for which the best analogy is probably IE7.

I haven’t done any UI studies, and I’ve not taken part in any. But Microsoft have analysed their UIs. They’ve gathered data on how those interfaces are used – automatically, in some cases (the Customer Experience Improvement Programs). The Ribbon is an improvement for Office, but it’s not going to be right for all applications. Many applications actually suffer under the classic File/Edit/View/Tools/Help system: the menus tend either to be padded with commands that are duplicated elsewhere, or to be ridiculously short (e.g. RSS Bandit’s ‘Edit’ menu, which only has a ‘Select All’ option – one which, if you’re currently looking at a browser tab, appears to do nothing; it’s only when you switch back to the Feed tab that you notice it’s selected all the items in the current feed or feed group). They’ll suffer equally under the Ribbon, particularly if there are too few features to make a Ribbon worthwhile.

When designing a UI for your application, don’t be too slavish to a particular model. If you find yourself padding out the menus to conform to the File/Edit/View model, or if all your commands are on the Tools menu, a classic menu probably doesn’t fit. If you’re not offering a feature for the user to customise the formatting of something, which the user will use regularly, a Ribbon is probably also wrong. The standard toolbar is probably enough.

Tuesday, 19 December 2006

Another knock-on effect of the stupid WinFX->.NET 3.0 naming decision

The next version of the Compact Framework will be called:

.NET Compact Framework 3.5.

Great way to confuse people.

Monday, 4 December 2006

Spotted: bad Google ads

Saw this strip of ads on a site: in an advert?

Test advert got onto a real site, perhaps? I know the site owner has only just added AdWords to their site, but it should show something useful!

( is reserved by IANA for use in example URLs)

Friday, 3 November 2006

How to fix the Smart Device Framework 2.0 installer

Neil Cowburn noted that the Smart Device Framework 2.0 installer doesn’t work properly on Windows Vista.

This is the comment I couldn’t post to his website:

“It's error upon error for this one. Code 2869 means that the dialog designated as an error dialog doesn't work how Windows Installer needs an error dialog to work - see So the real error is being lost. Visual Studio is generating you a broken Error dialog.

I'm going to guess that the real error is that your custom action is failing, because it isn't privileged. On Windows Vista, only custom actions marked NoImpersonate get to run with administrative permissions (actually, they run as LocalSystem). Visual Studio cannot be told to mark a custom action as NoImpersonate (as far as I know). If you want to fix it after generating the MSI, you can use Orca (the MSI table editor, part of the Platform SDK, search for Orca.MSI) to add 2048 to the Type column of the three rows which use the InstallUtil DLL (which is the native code that calls into your managed DLL). I've also heard of tools which can be used to execute SQL against an MSI - it should be possible to do this with VBScript using the MSI object model.

The Windows Installer team does not recommend the use of managed code custom actions. This message does not seem to have got through to the Visual Studio deployment team. The recommendation is to use as few dependencies as possible, which generally translates to statically-linked C++ code.

Digging around in Reflector shows that you're using the custom actions to add the SDF to the ActiveSync Add/Remove Programs box. I'm not really a fan of this idea - and I note that Microsoft doesn't do this with the Compact Framework itself. It would be simplest to scrap this custom action completely. I also note that you're not handling rollback or uninstall. You should also use the /register flag to CEAppMgr.exe so that it doesn't install immediately on the connected device (or install when the next device is connected).

Windows Installer does support finding and executing an EXE that's already on the system as a custom action, but I don't think you can do this in Visual Studio.

You might want to consider a better installation solution, such as Windows Installer XML (WiX,”

I’ve been getting into WiX recently. I was going to do a presentation at DDD4, but not enough people voted for it. If you fancy attending any of the proposed sessions and can spare a Saturday, sign up now. (I’m waiting for the final agenda to be posted, but the places may all go before that happens.)

Wednesday, 20 September 2006

Petition to rename .NET Framework 3.0

As soon as I heard that Microsoft were changing the name WinFX, an umbrella name for Avalon, Indigo – oh, excuse me, Windows Presentation Foundation and Windows Communication Foundation – and Windows Workflow, to .NET Framework 3.0, I thought it was an incredibly bad idea.

The trouble is that it confuses everybody. I’ve seen people commenting that they’ll delay moving to .NET 2.0 ‘because .NET 3.0 is just around the corner.’ They then get horribly confused – and normally angry – when you tell them that the CLR, BCL, Windows Forms, ASP.NET and the language compilers are completely unchanged in ‘.NET 3.0’ from .NET 2.0.

Someone’s started a petition to name it back to WinFX. I don’t care what name it has – does it even need an umbrella name? Can we not call the three subsystems by their own names? Even better, their codenames, which, despite not being descriptive, were at least easy to say! Do I really need to install WCF and WF just to get a WPF application to work?

What I suspect it does mean is that versions of .NET after 3.0 simply won’t install or work on Windows before XP SP2, Server 2003 SP1, or Vista. That’s a huge compatibility loss – .NET 2.0 works right back to Windows 98 and NT 4.0. Or, if new versions of the CLR and BCL will install and work on older operating systems, they’ll have another stupid naming decision to make.

It also means that, even for downlevel systems, the installers for the One That Is To Come After will be more humungous than ever. People still complain about the size of the Framework installer; most end users will never have a web server installed on their machine – security considerations would suggest that they shouldn’t – so why in hell does the .NET Framework include and install ASP.NET on every single box? This leads to people asking about, and trying to invent, jerry-rigged systems to either link the framework into their binaries or ship only bits of the Framework. It’s a recipe for disaster come servicing time.

Please, if you value everyone’s sanity, sign this petition. It probably won’t do any good but you can at least say you spoke up against the insanity.

Tuesday, 19 September 2006

Biometric scanners not particularly reliable

Dana Epp posted a movie from Mythbusters cracking a fingerprint ‘lock’.

Not exactly secure.

Watch now. (YouTube, may get taken down when someone spots the copyright violation. What the hell, it’s Talk Like A Pirate Day. Arrr!)

Sunday, 10 September 2006

Missing font on Vista RC1

Anyone reading this blog from Windows Vista (Pre-RC1 build 5536 or RC1 build 5600) might notice a slight difference in appearance between Vista and XP. What is it?

They forgot to include Trebuchet MS Italic!

Windows Vista instead installs two copies of Trebuchet MS Bold Italic. When called upon to produce Trebuchet MS Italic, the Windows TrueType/OpenType renderer instead simply slants a copy of Trebuchet MS. This doesn't look very good - there's a reason that Vincent Connare drew a true italic.

I remember reporting a bug on Pre-RC1, but since I can't access the feedback site (being part of the Customer Preview Program rather than a 'beta tester', a differentiation that seems a little bizarre – do Microsoft not want bug reports from CPP members?) I don't know if anything's being done.

Perhaps Michael Kaplan could see what's happening here (although he does use Tahoma 'Italic' on his blog ;-) – there is no true italic for Tahoma).

Wednesday, 30 August 2006

VS6 family completely broken on Vista Pre-RC1 (or so it looked)

Looks like my compatibility problems are solved: in the radical way, by completely breaking Setup on all three applications. Setup now crashes on clicking 'Next' at the welcome screen. No Program Compatibility settings work.

These tools are essential for my work. If they don't work I cannot upgrade.

UPDATE 2006-08-31: It appears that whatever was causing this may have been a temporary glitch; Visual Studio 6.0 is now installing. Still, given my experiences with eVC before, I can perhaps be forgiven for jumping the gun?

I can't go in and add a note to the bugs I filed, because as a Customer Public Preview user, while I can submit bug reports using the Beta Feedback tool, I can't log on to Connect to make changes or additional comments. This means I'm wasting someone's time to triage bugs that I now cannot repro.

Tuesday, 29 August 2006

People exhibit surprise that Windows Media DRM is 'cracked'

Example surprise.

I’m not at all surprised this is possible. In order to decrypt data, you need two things: the encrypted data, and the decryption key. In order for media playback of DRM-protected files to be possible while disconnected from the Internet, both of those things need to be on your PC. If the key is already on the attacker’s PC, it’s only a matter of time before they find out where it is.

There are of course things that can be done – such as encrypting the decryption key with a master encryption key, so that it isn’t on disk in a usable form, then decrypting it only while it’s actually needed for playback – but ultimately, the key will be visible in the system’s memory somewhere for long enough to be copied.

Saturday, 26 August 2006

TV Licensing has serious issues

(Note for non-Brits reading this [if any]: in the UK we are required, if we want to watch broadcast television, to pay a licence fee of £131.50, which goes to support the BBC – BBC TV has no commercial advertising, except for other BBC programmes. You have to show that your TV is physically incapable of receiving broadcast television to avoid it.)

When I moved into this flat, I bought a TV licence using the TV Licensing website. I did not notice at the time that it had ‘auto-corrected’ the address I entered. This house, an early-20th-century end terrace, was split into two flats by the landlady in 1999 (based on the details from the Council Tax website). The landlady, and all the rental documents, refer to my upstairs flat as ‘17A’. However, the council, for council tax, refers to it as ‘First Floor Flat’. (Actually looking at it right now, the Council Tax website I linked above shows ‘1st Flr Flat’ – they clearly also have a stupidly short field length.)

When I moved my credit cards, I wasn’t aware of the council’s designation, so I used ‘17A’, and that’s what I entered on the TV Licensing website as well. Most UK websites have a gazetteer – a lookup of house number and postcode to pick the correct full address, and this one is no exception. It expanded the street name and town correctly, but dropped the ‘A’, so my licence is actually for number 17, which according to the council, no longer exists. The landlady calls the downstairs flat number 17.

Earlier this year, I decided to get a PVR (Humax PVR9200T, very good thanks). I ordered it with my credit card, and as usual when buying from a new supplier, they insisted it was sent to the card address (i.e. 17A). Whenever you buy TV equipment, this is reported to TV Licensing. Ever since then, I’ve been getting demands to buy a licence for 17A – which I can’t, because the website won’t accept it!

I’ve tried to change the address. You can still edit the address after it’s been expanded out on the change of address page, and I’ve tried that, but the ‘A’ is still dropped.

I’ve sent them letters. They’ve ignored them.

I’ve tried to phone them. They have an automated change of address system. It’s unusable. I’ve tried to leave a phone message. You get about 30 seconds, which is far too little to actually explain the problem. I’ve asked to be phoned back – they haven’t. I’ve tried to be put through to an agent – I just get disconnected.

I’ve sent emails through their website. They’ve been ignored or lost.

What’s almost more annoying is the lackadaisical attitude they’ve taken. I bought the PVR in February. Whenever one of my many attempts to contact them and get this corrected went unanswered, I was over-optimistic and assumed they’d sorted it, whereas silence actually meant I was being ignored.

Today I’ve sent two more emails, one using the contact form and the other directly to the email address shown. Hopefully one of them will be processed this time, before the bailiffs come round.

I can’t even try to buy another licence (I’d lose about £40 because this licence still has four months to run, but that’s worth less to me than all this hassle), because I still can’t enter the correct address!

Thursday, 24 August 2006

Generic components can only get you so far

We had a strange issue with Meteor Server about two years back. Under stress, the Mem Size column (working set size, in fact) in Task Manager would be up and down like a yo-yo. I initially wondered whether the OS was trimming the working set too aggressively, and tried using the SetProcessWorkingSetSize function to increase the quota. Result: no improvement, it was still happening. The time spent in the memory allocator was causing the server to slow down significantly, and as it started to slow down, the problem would get worse, and worse, eventually virtually grinding to a halt.

To avoid the overhead of context switching between multiple runnable worker processes, we moved a long time ago (before I started working on it) from a model where each client had a dedicated worker process to a much smaller pool of worker processes (the old mode can still be enabled for compatibility with older applications that don’t store their shared state in our session object or otherwise manage their state, but it is highly discouraged for new code). This does mean that there will be times when a client request cannot be handled because there is no worker process free to handle it.

After some thought and experimentation, it became clear that what was happening was that when the server started to slow down, the incoming packets were building up in, of all things, the Windows message queue. I should say at this point that we were using the Winsock ActiveX control supplied with Visual Basic 6 for all network communications. We already had a heuristic that would enable a shortcut path if the average time to handle a request exceeded a certain threshold. This shortcut path simply wasn’t fast enough.

To work around the problem, I added code that would actually close the socket when either of these conditions held. This was pretty tricky to get right, as we had to reopen the socket in order to send a response out of it, and would then need to close it again if the average time still exceeded the threshold. There was at least one server release where the socket would not be reopened under certain conditions (if I recall correctly, when the time threshold was exceeded and a worker process became available at the same time). The memory allocation issue still occurred, but it was contained. I added an extra condition that would also close the socket if no worker process was available (this would prevent some retries of lost responses, and some requests for additional blocks – both handled in the server process without using a worker – from being handled).

Then, recently, we discovered a problem with the code used to send subsequent packets of large responses, too large to fit into a single packet (the application server protocol is UDP-based). We weren’t setting the destination (RemoteHost and RemotePort properties) for these packets, assuming that this wouldn’t change. Wrong! If another packet from another client arrives (or is already queued) between calling GetData and SendData, the properties change to the source of the new packet. This sometimes meant that a client would receive half of its own response and half of a different one, which when reassembled would be gibberish (typically this would cause the client to try to allocate some enormous amount of memory, which would fail). I corrected that, but found that the log in which we (optionally) record all incoming and outgoing packets still had some blanks in it where the destination IP and port were supposed to be – these values retrieved from the RemoteHostIP and RemotePort properties. Where were these packets going? Who knows! Perhaps they were (eek!) being broadcast?

The WinSock control really isn’t designed to be a server component. Frankly it was amazing we were getting around 2,400 transactions per minute (peak) out of it. It was time to go back to the drawing board. Clearly I was going to need an asynchronous way of receiving packets, and the Windows Sockets API really isn’t conducive to use from VB6, so it was going to be a C++ component. Since string manipulation and callbacks were involved, I went with a COM object written with ATL.

I surmise that the WinSock control uses the WSAAsyncSelect API to receive notifications of new packets, and that’s why we were seeing the message queue grow with each packet received. The new component uses WSAEventSelect and has a worker thread which waits on the event for a new packet to arrive. When a packet arrives it synchronously fires an event, which has the effect of waiting until the server finishes processing the packet – either discarding it (as a duplicate, otherwise malformed, or due to excessive load), sending the next block in a multi-block response, or handing the request off to a worker process.

This does mean that there could be long delays between checking for packets. Doesn’t that cause a problem? Not really. The TCP/IP stack buffers incoming packets on a UDP socket in a small First-In-First-Out buffer. If the buffer doesn’t have enough space for an incoming packet, the oldest one in the buffer is discarded. That behaviour is perfect for our situation. You can vary the buffer size (warning, it’s in kernel mode and taken from non-paged pool, IIRC) by calling setsockopt with the SO_RCVBUF parameter.

For added performance the socket is in non-blocking mode, so on sending a packet, it simply gets buffered and the OS sends the data asynchronously.

Net result? No more problems with misdirected packets (my new API requires you to pass the destination in at the same time as the data), a step on the road to IPv6 support (the WinSock control will not be updated for that) – and a substantial performance improvement. My work computer now does 7,000 transactions per minute (peak) on the same application – and the bottleneck has moved somewhere else, because that figure was achieved with only three worker processes while the earlier one was with eight. (Hyperthreaded P4 3.0GHz). We saw much less difference in a VM (on the same machine) I’d been using to get performance baselines, but what we did see was that the performance was much more consistent with the new socket component.

The sizing for this application was previously around 1,500 transactions per minute per server, so this really gives a substantial amount of headroom.

My component would be terrible for general use – but it’s just right for this one.

Wednesday, 23 August 2006

Sometimes where code runs is more important than what it is

Sometimes, to get the best performance from some code, you have to change the architecture.

Our application server product, Meteor Server is a complex beast. To be able to handle requests from clients concurrently, the main MeteorServer.exe process farms out those requests to a pool of worker processes. (Yes, we could switch to a single multi-threaded worker process even with VB6, but it’s a lot of effort and many existing applications may not be threadsafe, so we’d have to offer both schemes, and that’s even more effort.)

We can’t multi-thread the main MeteorServer.exe process because it’s written in VB6, and while you can make an out-of-process COM server process (a local server in COM parlance, an ‘ActiveX EXE’ in VB6 terminology) multi-threaded, you can’t make a ‘Standard EXE’ multithreaded. Oh, there are hacks, but I’m of the firm opinion that you shouldn’t subvert a technology to make it do something it wasn’t designed to do – when it goes wrong you will get no support.

A Meteor application is a COM object which exposes two methods through a dispatch (Automation) interface – well, one property and one method. The property, called VersionString, is simply to allow Meteor to pick up and display version information for the application. Every other piece of interaction is done through the TerminalEvent method, which receives a couple of interface pointers to allow it to call back into Meteor, a flag indicating whether this is a new client, a numeric event type indicating what the user’s last action was, and a string representing any event data. The application then calls methods on the interface to accumulate a batch of commands to be sent to the client – things like clearing the screen, setting the text colour, displaying text at a given location, sending a menu of options, defining an entry field. When one of these methods is called, it’s turned into an on-the-wire format, with an operation code and a wire-representation of the parameters. When the application returns from TerminalEvent, the server sends the complete batch to the client.

When I first started working on Meteor Server, when the application called a command-generating method, the stub of code in the interface made a call back into the MeteorServer.exe process to perform the wire-format translation. This meant it had to wait for the server process to finish whatever it was doing and go back to waiting for a window message. This made the server process a serious bottleneck – it was a ‘chatty’ interface, which is really not advisable across process boundaries. About three years ago, I looked at the code and realised it actually had no dependencies on any data in the server process, and had the idea to move this formatting code into the worker process that the application object was running in, to improve both performance and scalability. The commands would be batched up in the worker process then only sent across to the server when the batch was complete.

About a year later I actually made the change – we were seeking a significant performance improvement at the time. I don’t have a record of the performance change but I think it was some decent multiple – 3x or so the transaction rate.

About this time last year, or a little before, I was asked to add a new feature. Meteor provides a session state storage object which can store arbitrary strings that the application sets. The new feature was to allow an application to copy the session state data from another session to its own – this allows a user to resume their work on a different client, for example if the hardware is damaged or otherwise fails. I initially put the extra code directly in the batch-retire method that the worker process calls on completing a request, adding a new parameter, but when testing for performance, discovered that the simple test to see if the session should be transferred caused a regression of about 10%, and that would mean the difference between exceeding and failing to meet the performance requirement for a different customer.

The solution was to make this a separate method that the worker process would call if required, and to take it out of the mainline, returning the batch-retire method to its previous implementation. On doing this, the regression had gone – we were back up to almost exactly the same performance level we’d had before.

Know the environment in which your code has to run.

You must call Dispose

You must call Dispose.

If an object implements the IDisposable pattern, or otherwise offers a Dispose or Close method, you must call it. No exceptions. If an instance member of your class implements IDisposable, your class should too.

OK, if you don’t, a finalizer might clean up after you. But you don’t want it to do that. The finalizer will only run once a GC has occurred. And a GC only runs based on heuristics of how much memory has been allocated (in general; there are a few cases in Compact Framework where the GC can be spurred to run, for example on receiving a WM_HIBERNATE message from the shell to say that physical memory is low). In .NET Framework 1.x and both versions of .NET Compact Framework, the GC has no idea how much unmanaged memory is being used by any object that manages an unmanaged resource. .NET Framework 2.0 does have the GC.AddMemoryPressure method, which can guide the GC to collect earlier than it might otherwise have done.

Finalizers don’t run on a regular application thread. They run on a special finalizer thread. This means you have to be careful around possible synchronisation issues. Objects to be finalized wait in a queue, and only one thread services that queue, so if a finalizer blocks, all undisposed objects will end up hanging around.

Once the finalizer has run, the managed memory isn’t automatically released. You have to wait for GC to run again. On the desktop or server, with the full Framework, you have to wait for it to collect the generation of the heap which the object is now in, which means an even longer wait for the managed memory to be released, which can keep the GC heap larger than it could have been.

The GC propaganda basically tells us we can be lazy. We can’t. We must clean up after ourselves. Treat the finalizer as a safety net.

Best practice is to manage one unmanaged resource with one managed object, and keep that managed object as simple as possible – ideally, just to manage that object. Make your resource manager class implement IDisposable and give it a finalizer as a backstop.

Sunday, 20 August 2006

Anyone managed to get eVC working on Windows Vista Beta 2?

My job involves a fair amount of C++ development for Pocket PCs and other Windows CE-based devices. I’ve been doing this for five years as of next week, so in that time I’ve used – and deployed projects developed with – eMbedded Visual C++ 3.0 with the Pocket PC 2000 SDK, the Pocket PC 2002 SDK, eMbedded Visual C++ 4.0 with Pocket PC 2003 SDK and now even a couple of things debugged with VS2005 (note: not compiled with VS2005 because they needed to be backwards compatible with PPC2003 and not redistribute the MFC 8.0 runtime – you can’t use the MFC 6ish supplied with the PPC2003 SDK with VS2005 because the headers won’t compile). To be able to upgrade to Windows Vista, compatibility with eVC is an absolute requirement – if it doesn’t work, I can’t upgrade.

(What about running under Virtual PC? Virtual PC doesn’t virtualise USB ports, new versions of ActiveSync don’t offer network sync, I don’t really fancy manual connection setup, and I can’t always rely on having devices with network support.)

Unfortunately it seems at the moment that eVC just doesn’t work under Vista. eVC 3.0 works with the Pocket PC 2000 SDK (only) installed, but as soon as you install the 2002 SDK (which shipped Platform Manager 4.0) it breaks, taking an access violation exception on opening a project. That’s installing with User Account Control enabled. With UAC disabled before installation, it doesn’t even get as far as an empty environment after installing the 2002 SDK.

eVC 4.0 barely even installs. If you launch Setup from the Program Compatibility Wizard, after setting Windows 2000 compatibility, it does install; using the default compatibility options (i.e. none) it crashes before even asking for the product key. I did find that a scripted build (using EVC /MAKE from the command line) would build for some platforms but not others, but again would crash on opening a project, making it impossible to debug. I’m guessing that UAC gets in the way of installing the SDKs to the right place.

This is a shame. I’d hoped that UAC would help with the myriad problems in trying to get eVC (either version) to work under a limited user account.

On Windows XP, to get them working as a limited user, both versions require users to have write-access to their installation directories. This stems from the Access/Jet database used to hold the processor type definitions, VCCEDB.MDB. Write access to this file is needed when running, and if you start more than one copy, Jet needs to create a ‘lock’ (.ldb) file in the same directory to manage concurrency. There are also registry keys which contain platform definitions under HKEY_LOCAL_MACHINE\Software\Microsoft\Windows CE Tools\Platform Manager, which contain things like the paths under Tools, Options, Directories. If eVC cannot write to this key, it creates an equivalent tree under HKEY_CURRENT_USER, but does not copy the original data, leaving you with a non-functional SDK. You have to copy the settings (maybe I should write a program to do this). I recall that there are other settings that you have to change the permissions on, but can’t recall what they are right now – it’s a while since I last had to do it.

eVC 4.0 also complains that it is unable to update its help system after you install any new SDKs. Running from a command prompt launched by makemeadmin doesn’t help, nor does running eVC as an administrator with Run As.

Somewhat related: VS.NET 2003 has a problem with device debugging as a limited user, where it fails to connect to a device that hasn’t been used for debugging before. You have to run as an admin (via makemeadmin) the first time you connect to a device (or the first time after cold booting). I surmise that some part or parts of the encryption keys are stored in a location that isn’t writable by a limited user.

Saturday, 22 July 2006

Avoid interactive services

It’s common, when thinking about providing configuration UI for a Windows service, to notice the ‘Interactive’ checkbox in the service properties dialog. Or maybe you have a server program which already has UI, which you’d like to turn into a service while keeping that UI available to the user.

Don’t be tempted.

Interactive services must run under the SYSTEM account (sometimes shown as LocalSystem). Well, OK, it’s not really a user account (you won’t see it in the user accounts database, it doesn’t have a password, it’s just assigned by the system to the special processes it creates at boot time like CSRSS and WINLOGON) but it behaves a lot like one. Anyway, the point is that it’s very highly privileged, and you should strive to have your services run with only the minimum privileges you need, rather than the maximum available. You cannot create a lower-privileged interactive service.

Interactive services also only work for session 0. On Windows 2000, XP and Server 2003, session 0 is the one which the user logs onto on the physical console – the actual keyboard, mouse and graphics hardware physically plugged into the machine. In Fast User Switching on XP, the first user to log on gets session 0, subsequent users get their own sessions. Windows XP Professional will remote session 0 if the user currently logged on locally to session 0 logs on remotely, or if no user is currently logged on. Windows Server 2003 will only remote session 0 if you pass the /console switch to mstsc.exe (Remote Desktop Connection). Windows 2000 never remotes session 0.

Users with other sessions never see the interactive service UI. If you’re trying to notify an administrator of a problem, they may not see the problem until physically logging on at a console, but the trend is towards remote administration. I have to walk all of probably 20 metres to administer servers at work, but it’s more convenient to do it at my desk, so I use Remote Desktop, when there isn’t a tool I can install on my workstation (for example, I’ve installed the Exchange Server 2003 management tools on my computer, and use RunAs to give myself appropriate privileges when using them).

On the current generation of operating systems, interactive services can be vulnerable to so-called “shatter” attacks, where an attacker sends window messages to the service to cause it to execute code on his behalf. To mitigate this, Microsoft are making a change for Windows Vista and its successors, including the next Windows Server version: no user will log on to session 0 any more – it will be reserved for privileged processes only. Instead, the first user will log on to session 1. Since processes cannot send window messages across sessions, the shatter attack can no longer work. But this also means that no-one will see your UI. For more information, see the whitepaper “Impact of Session 0 Isolation on Services and Drivers in Windows Vista”.

So what to do? You will have to come up with some method of inter-process communication to allow a separate UI process to send commands to the service and display results. You might as well make it a network-capable IPC mechanism – then you can make your UI process capable of running on another machine.

Sunday, 16 July 2006

Warning: Installing VS 6.0 on Vista Beta 2 leaves vulnerable Java VM

I still need to use Visual Studio 6.0 at work – we have a lot of legacy software, and even some actively enhanced software, which uses VB 6.0 and Visual C++ 6.0. The installer for Enterprise Edition requires Microsoft’s Java VM to be installed and won’t continue without it. The version installed dates from 1998 (as does VS 6.0, of course) and is Build 2572.

Windows Update does not offer any updates for this VM on Vista Beta 2. There is no other source of updates. There were many security patches for this software.

When you have finished installing Visual Studio 6.0 you should immediately uninstall the VM.

You should also of course install Visual Studio 6.0 Service Pack 6. Some users have reported problems installing VS6 SP5 but I had no problems with SP6. Well, except that Windows Vista popped up a compatibility box claiming that the software hadn’t installed correctly, but it seemed to be OK.

If you need Java you should install Sun’s runtime environment (at time of writing version 5.0 Update 7). Note that when you have the JRE loaded, the Aero environment will shut down and you’ll revert to the Basic theme – no ‘cool’ 3D or glass effects – until the process that launched Java shuts down. This is because the JRE uses DirectX in an exclusive mode to do its drawing. This is supposed to be fixed by Vista RTM (source).

[Update: I have discovered that you can fake it by simply placing an empty file called msjava.dll in your System32 directory (presumably SysWOW64 on an x64 box). This also works for machines installed from clean with Windows XP Service Pack 2, which doesn’t include the Microsoft VM, although Windows Update does offer updates for the Microsoft VM after installing from the VS 6.0 CD, on XP.]

Tuesday, 23 May 2006

Don't bother using Google to search for Windows APIs anymore

I’ve tried several times over the last week or so to look up details of an API, given its name, using Google (using a search engine in the web browser is still faster than waiting for the documentation browser to load) and have found that the official documentation – that is, MSDN – has just plummeted off the list.

MSN Search still brings up the API documentation usually as the first or second result (often, if it’s a routine that Windows CE also implements, it’s the first and second result). In case you think this is a quirk or that somehow Microsoft are blocking Google’s spider, it’s interesting to note that MSDN is usually the top hit on Yahoo search as well. Of course that doesn’t prove that Microsoft aren’t blocking Google’s spider.

If, like me, you’re using IE 7 Beta 2, you can use the MSDN Lab search (which uses MSN Search under the covers) directly from the browser search box: go to the MSDN Lab search page and click the “Got IE7? – get our search” link in the bottom right-hand corner.

Sunday, 2 April 2006

A question for the mobile networks: when are you going to deploy EDGE?

Mobile data communication is becoming a major business tool for many logistics and distribution companies, and for businesses with a logistics or distribution component. Instead of downloading a whole batch of work to a handheld or other mobile computer, then uploading the results of the whole batch at the end of the working day, the business can get more timely information by having the mobile send its results as work is done. The business can also expose some of this information to its end customers, for applications like live package tracking. A system can even send live updates of additional work required to the mobile computer, reducing the need for the user to input job information manually.

For timely updates and responsive applications, it helps to have fast transfer speeds. The last generation of enterprise mobile handhelds, at least here in the UK, supported integrated GPRS. The new generation just coming on stream supports EDGE – Enhanced Data rates for GSM Evolution (hmm, smells like an invented name to me). EDGE offers more bandwidth than GPRS, but is entirely compatible with GSM, using the same basic radio format, unlike UMTS (3G). This enables existing GSM networks to be upgraded to support the higher data rates.

Unfortunately, on the whole, the UK networks haven’t. They’ve concentrated on 3G. Orange have recently announced some EDGE support, while (as far as I know) the others have made no such announcements.

The networks spent a huge amount of money on 3G licences, and the customers basically haven’t turned up (ok, Ian, I’m excepting you and your N90!) It shouldn’t matter where they recoup this investment – hell, it’s now a sunk cost. Recouping from the mass market of existing 2G customers, to me, makes more sense than trying to charge huge premiums on UMTS.

Adding EDGE would also improve bandwidth for 3G users outside the 3G coverage area, probably at a lower cost than expanding the 3G network, assuming that the user’s equipment also supports EDGE.

Now, I wonder if someone could ask this question for me?

Saturday, 1 April 2006

Race condition in eVC linker?

I’m not sure exactly why this happened. Yesterday I was trying to do a batch build for one of our most complex components – our Barcode Scanner Hardware Abstraction Layer (ScanHAL). This is a set of libraries which implement the same interface, to make our applications hardware independent with respect to the barcode scanner, making it possible to run more-or-less seamlessly on HHP Dolphin, Intermec and Symbol hardware, as well as on handhelds with no built-in barcode scanner (there’s a stub implementation). The clever bit (heh) is that all of the different libraries are included in the same CAB package, with a setup DLL which probes to decide which library is correct for the handheld it’s running on.

Anyway, the release process involves a batch build of everything, which rebuilds everything from source. For historical reasons the projects still support Pocket PC 2000, which ran on a bunch of different processor types (ARM, MIPS, SH3, x86 emulation). The build process currently builds both debug and release builds. For one particular customer, we made available a release build with debugging information, stripped of private symbol information using a tool by John Robbins called PrivateStrip – this means that they can tell us which function the program crashed on, if it does, but can’t easily disassemble the library. So in all there are about 20 different configurations of different libraries that need to be built.

Unfortunately since Wintellect reorganised their website, PrivateStrip is no longer available. I hope they’ll reinstate it.

Yesterday, the linker was repeatedly hanging in the last stage of building DLLs: it had just output the message about building the .LIB and .EXP files, but wasn’t completing the build. On a couple of occasions it did this on the last DLL – but it wasn’t consistent about which DLL it stopped on. My work machine is hyperthreading-capable, and this was happening with HT enabled. Turning HT off enabled me to complete the build.

I don’t know whether this would also happen on a dual-core or other multiprocessor machine.

You might need to be careful if you’re using older tools (eVC 3.0 and 4.0 both use modified versions of Visual C++ 6.0’s LINK.EXE) on a computer with more than one logical or physical processor.

Friday, 24 March 2006

Best line from Hustle tonight

Tailor: “Do you dress to the right, sir?”

Danny: “No, I always vote New Labour.”

Tailor: “Ah, swinging to the right it is…”


Saturday, 18 March 2006

Subtle interaction between WM_DESTROY handler and OnFinalMessage in ATL for CE

There’s a subtle issue around handling WM_DESTROY in an ATL window and also overriding OnFinalMessage, if using the ATL windowing classes on Windows CE.

Typically, you would override OnFinalMessage if you wanted to do some final cleanup when the window has been destroyed and no further messages will be received. The canonical example – given in the documentation – is automatically deleting the object that manages the window using delete this.

On the desktop, OnFinalMessage is called by the CWindowImplBaseT::WindowProc function after the WM_NCDESTROY message is received. Since Windows CE top-level windows don’t have a non-client area, the non-client messages were removed from the platform. The Windows CE version of ATL therefore uses WM_DESTROY instead.

The code inside WindowProc only cleans up the window proc thunks (dynamic objects containing code used to patch between Windows’ expectation of a flat API, and a call to a C++ object member function) and calls OnFinalMessage if the message (WM_DESTROY or WM_NCDESTROY as appropriate) is not handled. This means it will work fine if you don’t override WM_DESTROY, but what if you need to do some cleanup here?

The last parameter of a message handling function in ATL is a BOOL reference typically named bHandled. This is set to TRUE by the message map macros (ATL message map macros generate a big if statement, rather than being table-driven like MFC) on calling the message handling function. If the value is still TRUE when the function returns, the message map considers it handled and stops processing; if it’s FALSE the macro code keeps looking for a handler.

If you handle WM_DESTROY on CE using ATL, you must set bHandled to FALSE, otherwise the window will never be cleaned up, and OnFinalMessage will not be called. The same is true if you handle WM_NCDESTROY on the desktop, but since this message is rarely handled, it’s less of an issue.

This issue was responsible for me leaking 96 bytes in a program every time an activity was completed, which doesn’t sound like a lot, but any long-running application needs to be free of leaks as far as possible. This was actually a Compact Framework program for the most part – this window implemented a signature capture control, which is actually pretty hard to do in CF 1.0, and we already had working C++ code (excepting leaks…). For some strange reason, though, a leak of 96 bytes here was causing an overall leak of over 300KB of virtual memory! Removing this leak has got it down to about 160KB but I’m stumped on where the rest of it’s going – the amount of VM allocated increases by 160KB whenever the form containing the control is created, but that isn’t returned when the form is disposed. A C++ program containing the control doesn’t exhibit the same behaviour.

Compact Framework seems to do wacky things to the process default heap, which usually means that you can’t actually view it with Remote Heap Walker – it aborts after several seconds of trying to walk the heap. To find this leak I created a non-default heap, overrode operator new and operator delete to allocate and free from this alternative heap, then viewed that.

Monday, 13 March 2006

Time to move on?

I'm thinking about moving away from Blogger/BlogSpot. It's not a great fit for my content really. I'm considering either or

If you're subscribed and want to follow me if/when I do move, I've added a feed through FeedBurner which should allow me to automatically switch to whichever new service I go for. In the spirit of full disclosure I should add that it also allows me to track who's reading my feed, which I don't get at the moment from Blogger. I believe it also does content negotiation, so if your aggregator supports RSS you should get RSS while if it supports Atom you should get Atom.

Tired of cancelling ActiveSync Partnership Wizard?

If you're a developer using Windows Mobile or Windows CE devices, you may be a bit fed up with the Partnership Wizard that appears when you connect a device. If you decide to create a partnership every time, your list of partnerships grows huge, you keep having to think of a unique name, and you still have to do it every time after hard resetting the device. Alternatively, if you cancel out or opt to connect as Guest, you have to do this every time you connect. After nearly five years of mobile device development I'm sick of the sight of this wizard.

There is another option: to use an undocumented registry key to stop the wizard appearing. Open Registry Editor (as an administrator) and navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows CE Services. Create a new DWORD value called GuestOnly and set its value to 1. The wizard will no longer appear on connecting a device. If you ever do need to create a partnership, you'll need to set this value back to 0 before connecting.

If you already have a partnership with a device, it will still connect with its partnership, rather than as Guest, even if this option is set.
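The same change expressed as a .reg file you can merge (set the dword back to 00000000 when you next need to create a partnership):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows CE Services]
"GuestOnly"=dword:00000001
```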

Sunday, 12 March 2006

What's the difference between x64 and IA-64?

There still seems to be a bit of confusion over the identity of 64-bit processors for Windows operating systems. Windows runs on two types of 64-bit processor, identified as x64 and IA-64.

Going with the oldest first, IA-64 stands for Intel Architecture, 64-bit and was Intel’s attempt to move to a 64-bit architecture. It remains compatible with 32-bit user-mode code through hardware emulation, but this hardware emulation performs so poorly relative to a genuine x86 that all operating systems shipping for IA-64 now include a software x86 emulation called IA-32 EL. IA-64 is the instruction set; the family of processors which implement it are named Itanium (which you’ll see critics refer to as Itanic, suggesting that it’s sinking) and so you’ll sometimes see code written for them referenced as Itanium Processor Family or IPF. You need a new operating system to run any code on an Itanium – it cannot boot a 32-bit operating system.

x64 is actually a Microsoft term and stands for Extended 64. It is implemented by both AMD and Intel, who respectively call it AMD64 and EM64T (Extended Memory 64 Technology). AMD invented it as extensions to the existing x86 architecture. An x64-capable processor natively implements x86 in hardware – it is an x86 processor, with additional modes. It boots up in the same 16-bit ‘real mode’ that an x86 does. It can run existing 32-bit operating systems. You may well have an x64-capable processor without realising it. All AMD Athlon 64 and Opteron processors, and Intel Pentium 4, Pentium D and Xeon processors built within about the last year, implement x64. To check whether your Intel processor supports x64, use Intel’s Processor Identification Utility.

Itanium had the early lead and for a while held the general performance crown, but the relentless increase in x86 clock speeds eventually allowed AMD’s Opteron to overtake it for integer calculation performance. Itanium still leads for floating point performance but has been stuck at 1.6GHz for about two years, if I recall correctly. In my opinion it’s liable to be beaten by Intel’s own ‘Woodcrest’ Xeon replacement later this year if it remains stuck at this (now relatively low) speed.

Itanium is now pretty much confined to big-iron datacentre servers. It’s good for highly computationally-intensive applications. If you just need lots of memory, go with x64. Itanium used to have an advantage in the number of supported processors too, but IBM recently started selling the xSeries 460, which supports up to 32 dual-core Xeons. This hits Microsoft’s limit of 64 logical processors, which applies to both architectures.

Saturday, 11 March 2006

Development Tools for Windows CE and Windows Mobile

This is kind of a brain-dump. I get asked this a fair bit – which tools do you need to develop for Windows CE and/or Windows Mobile? (See here for the difference.)

Native code:

For Windows CE 3.0 custom platforms, Pocket PC 2000 and Pocket PC 2002: eMbedded Visual C++ 3.0. You cannot debug on these older devices using eVC 4.0; you can’t debug CE 4.x or later devices using eVC 3.0.

For Windows CE 4.x custom platforms: eMbedded Visual C++ 4.0. At least SP1 is required for CE 4.1, SP2 for CE 4.2, latest is SP4. SP1 and SP2 were mutually exclusive – if you installed SP2 you couldn’t develop for CE 4.1; this was rectified in SP3.

For Pocket PC 2003 (alternatively Windows Mobile 2003 for Pocket PC), Smartphone 2003 (Windows Mobile 2003 for Smartphone) and respective Second Editions: eVC 4.0 SP2 or later, or VS 2005.

For Windows CE 5.x custom platforms: eVC 4.0 SP4, or VS 2005. You will get link errors complaining about corrupt debug information if you use eVC 4.0 because the platforms are actually built using version 13.1 (VS2003–compatible) compilers while eVC 4.0 SP4 can only handle debug information from version 12.0 (VC 6.0–compatible) compilers, hence SP4 only includes 12.0 compilers.

For Windows Mobile 5.0: VS 2005 only. The SDKs do not install into eVC 4.0.

For whatever device you’re building for, you need the correct SDK. However, you will find that programs are binary-compatible across different CE platforms, if the APIs required by the program are present on the platform. Using the correct SDK ensures that you build for the correct processor type and don’t accidentally reference APIs that won’t be available at runtime.

Managed code:

.NET Compact Framework 1.0 is supported for Pocket PC 2002, Windows Mobile 2003 for Pocket PC, Windows Mobile 2003 for Smartphone and Windows Mobile 5.0, and custom CE 4.x platforms. For Pocket PC 2002 you must use VS.NET 2003; for Windows Mobile 5.0 you need VS 2005 (I think). For Windows Mobile 2003 you can use either, and I would strongly recommend using VS 2005 as soon as you can stand to convert your project. This will completely rewrite your resx files. Converting the project does not mean immediately upgrading to .NET Compact Framework 2.0, that’s a separate step.

.NET Compact Framework 2.0 requires VS 2005 and only runs on Windows Mobile 2003 for Pocket PC (not WM2003 Smartphone), and CE 5.0 and Windows Mobile 5.0 devices.

VS2005 requires ActiveSync 4.1, minimum, for deployment and debugging. I was originally annoyed at the loss of network synchronisation capability, but found that, if a wireless connection is present, you can begin a debugging session over USB and continue over wireless if you disconnect from the cradle or cable.

To complete the compatibility matrix, as far as I can see CF1.0’s SqlCeClient only works with SQL Server CE 2.0 while CF2.0’s only works with SQL Mobile 2005 (SQL Server 2005 Mobile Edition). If anyone knows different let me know.

Monday, 27 February 2006

I tried so hard

I tried so hard, but I was defeated.

I was looking to buy Joe Satriani’s new album, “Super Colossal”, from a genuine UK retailer – ideally actually a UK copy. First of all, it doesn’t look like this is getting a UK release, or at least if it is, there’s no official information on it.

OK, so it has to be an import. I look on Amazon UK, it’s there, I pre-order it. About a day later I get an email – they’re cancelling the order. So now I’m buying from CD-WOW instead. Since this is Sony USA, I hope it’s not too badly DRM infested (although I am of course running as a limited user).

If you’re interested, Joe’s podcasting a little about each track, plus a one minute (or so) preview of the track. Schedule: Monday/Wednesday/Friday. If you want the videos, you’ll need QuickTime 7. I’m using Media Player Classic plus ffdshow codecs.

Tuesday, 14 February 2006

Back in time

I’ve been meaning to switch around the samples of the band I was in, back in sixth form. This is an appropriate day to do it, because the song I’ve put up was written the day after Valentine’s in 1995, and it was written because of what happened.

Song: Torn In Two (MP3, 128kbps, 7.2MB, 7:53).

The song, basically, is about Dave having a crush on a girl in his music class, and her not being interested. He sent her a card, she sent him a note, and he was pretty cut up. He wrote some lyrics that night (first verse and chorus, if I remember right), the following morning he and Roger put together a chord sequence and basic vocal melody, then that afternoon I joined them to practice. After Roger had to leave, I added the second verse, and David and I put together the third. I took some of my inspiration from my own feelings at the time (yup, more unrequited crushing) and some from the note Dave received. There are harder things to take when you’re 17 than ‘I just want to be friends,’ but not all that many.

We practiced the song for about a week, then Dave called his guitar teacher and asked if, instead of a lesson, we could record it. Our then-drummer, James, couldn’t make it, so we had Nigel, the teacher, program a sequencer with a simple drum beat. Roger and Dave played together, with me singing a guide track, to get the keyboard track into the sequencer as well. Then Dave let it all out (and boy, did he let it out) on his lead guitar track – that’s all one take I think, or it might be two. I added the vocals, then we asked Nige to add a bass track for us. There’s a little ‘fill’ bit in the bassline where the fridge motor cut in and knocked the sequencer out for half a bar! A quick mix later and we had something. I can’t recall if Dave sent the girl in question a tape or not – he may well have!

I’m not sure if we were asked, or Dave asked, to perform Torn In Two at a school concert. The girl asked us not to, but by that point Dave had got over it a bit, so we did it anyway.

Six months later we had a new drummer, Chris, and returned to Nige’s studio to record four more tracks (among them, Survivor). We asked if we could add a new, live, drum track to Torn In Two. We found the tape, which had miraculously not been recorded over, but the sequencer program was gone: Roger had to re-record the keyboards. Nige programmed in a ‘click’ track for Chris to follow, since the sequencer timing data was still on the tape. I’m still amazed at just how well Chris was able to add drum fills building to some key parts in Dave’s solos.

I love this song. It’s my favourite of the ones we recorded. Now that I have my own guitar, it’s one of the songs I practice, although I’m only playing the chords in a semi-acoustic setup.

I’ve left Survivor up for the moment; I’ve re-encoded to 128kbps to save some space and download time.

Thursday, 2 February 2006


Eric Sink has a great article “Yours, Mine and Ours” in which he discusses different types of software:

“I claim here that there are three categories of software:

  • MeWare:  The developer creates software.  The developer uses it.  Nobody else does.
  • ThemWare:  The developer creates software.  Other people use it.  The developer does not.
  • UsWare:  The developer creates software.  Other people use it.  The developer uses it too.”

Can I add WeWare to that list? I define it as MeWare but for your own development team. This gives it a slightly larger audience – requiring a touch more thought than MeWare in user interface and usability, but not really requiring the robustness or even completeness of true UsWare.

I spend a fair chunk of my time on WeWare – libraries for helping to complete a project rather than actually writing the code that solves the customer’s problem. Of course I do a lot of that too.

We’re still having trouble pushing Meteor Server over the chasm from WeWare to UsWare (from an application-development point of view, at least – there are plenty of installations where we wrote the application). We might have a couple of customers now, but it remains to be seen whether they’re able to run with it themselves.

Why you should install and enable a firewall on your PC

…even if you have a hardware firewall/NAT/whatever.

Larry Osterman has a great post “Firewalls, a history lesson,” in which he makes an analogy to the first world war. An interesting read.

I should take up this fight with my colleagues again. They all think I’m crazy for running as a low-privileged user and having the XP SP2 software firewall on, but when one of the salesmen brings their horribly-infected notebooks into the office for me to disentangle, I’m glad of it.

I remain unconvinced of the merits of a two-way firewall: the trick is not to get the malware onto your PC in the first place. Two-way firewalls are pretty annoying whenever there’s a change to the client software you use; you only have to configure an incoming-only firewall when there’s a change to the services you provide. There’s a common problem in computer security – ensuring that you don’t train the user to just click ‘Yes’ all the time. That’s why the ‘enter root password for elevation’ prompts in Mac OS X worry me, especially since there doesn’t seem to be a way for the user to validate that the prompt came from a secure subsystem rather than J. Random Malware. I’m actually happier that the initial plan for Windows Vista is that “Consent Admins” will default to being presented simply with a dialog explaining the elevation, to which you click Permit to elevate or Deny to refuse.

Thursday, 12 January 2006

What's the difference between Windows Mobile 5.0 and Windows CE 5.0?

I got the following question in email today:

Can you please explain me the difference in windows CE 5.0 and windows mobile 5.0? Isn't windows 5.0 is based on windows CE 5.0? If yes then why is the development tools for these are different, i mean windows CE based applications can be developed using evc++ 4.0 but for WM 5.0 based application we need Visual Studio 5.0?

Well, I’ve kind of answered this question already. The only new thing to add is that Windows Mobile 5.0 for Pocket PCs is based on Windows CE 5.1 bits according to the About screen (Microsoft using pre-release versions of Windows CE in Windows Mobile again? That caused a boatload of trouble for Pocket PC 2000 and 2002 and I thought they’d finally got over it.)

As for the development tools question, don’t ask me – ask Microsoft. The simple answer is that they didn’t generate an eVC-compatible SDK therefore it doesn’t register with eVC 4.0 therefore you can’t select Windows Mobile 5.0 as a target. As to why they didn’t do this, who knows? Perhaps something to do with the old Platform Manager, which was not the most reliable of software (understating the case severely). Also, it appears that MFC 6.0/CE and ATL 3.0/CE are not supported for new development; they don’t ship with the SDK any more, although the DLLs do ship on the device I think.

Tuesday, 10 January 2006

Two more alleged WMF 'vulnerabilities' - but there's a problem with the 'exploit'...

A number of news sites are pointing to a post on the Bugtraq mailing list alleging more problems with Windows’ handling of the Windows Metafile format.

Just a quick recap on the original issue: I originally thought that this was simply a buffer overflow issue, but in fact it appears that it’s something different – that an intended feature can be used in an unintended way. As I said last time, a WMF file contains a sequence of GDI commands. One of the supported commands is the GDI Escape function, which allows the application programmer to pass additional commands to the graphics driver, which – because GDI is a unified screen and printing API – can be a printer driver. The exploit apparently uses the SETABORTPROC escape. This escape was intended to permit GDI to call the application back, periodically during printing, to determine whether the user had tried to abort the print job. The attacker can use the SETABORTPROC escape to point to another part of the WMF file which contains code, which will be executed by GDI. It’s a case of an overlooked feature with insufficient security protection, not a failure to correctly validate the input parameters – the parameters are valid.

To the new ‘vulnerability’. Here we are dealing with a malformed file. The attacker supplies sizes for some of the parameters which are larger than the amount of data supplied. There is no vulnerability here – all that happens is that Windows tries to copy more data than is supplied. When the source pointer goes off the end of the input buffer, it may encounter an unallocated page. When this occurs, an access violation exception occurs, which, unless the application has been written to guard against it, causes the application to crash.

Note that this cannot crash Windows itself – only the process performing the file parsing. In many cases that will be Windows Explorer (explorer.exe), but Explorer should restart after a crash (it always used to – I haven’t had a problem in a while, so I don’t recall if it still does). If an attacker put a WMF file malformed in this way on a website, and the user browsed to it, the browser would simply crash. So yes, it is a denial of service of a sort, but it’s not a serious issue.

With this information in hand, Microsoft’s response seems pretty reasonable.

Don’t believe everything you read on Bugtraq.

Thursday, 5 January 2006

WMF vulnerability patch to be released early

2pm Pacific Time today. That’s 10pm GMT according to my handy World Clock app (recently updated to not crash if you’ve disabled automatic daylight saving adjustment).

Tuesday, 3 January 2006

Thoughts on the WMF vulnerability

OK, so we know there’s a zero-day vulnerability (i.e. one which was not reported to any security organisation or vendor before being exploited) out there which uses a malformed WMF file to execute code on a victim’s computer. This is being termed a ‘remote code execution’ vulnerability. That term is now used to cover any situation in which an attacker can cause code to be executed, without differentiating between attacks where the attacker actively sends data over a network to the victim, and attacks (such as this one) where the victim must request data from the attacker. However, the attacker can cause software on the victim’s computer to request the bad data automatically – in this case, for example, by sending the victim an email message containing a suitably malformed image file, which some email packages will automatically render (draw) when the message is displayed.

Firstly, what is a WMF file? It stands for Windows Metafile, which is a pretty meaningless name. What it actually contains is a sequence of commands – mapping one-to-one to GDI API calls – for producing a drawing. The easiest way to construct a WMF is to use the CreateMetaFile API, which creates a GDI drawing surface (a device context) and returns a handle to it (an HDC). Once you’ve finished drawing – using the regular GDI API calls – you call CloseMetaFile, which gives you an HMETAFILE. You can then draw the metafile again using PlayMetaFile. It appears that this API is the one containing the vulnerability – some part of the format is insufficiently checked, and the attacker can therefore cause the processor’s instruction pointer to end up pointing at a part of the supplied file.
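To make the format concrete, here is a sketch that writes the same bytes CloseMetaFile would produce for a trivial drawing – an 18-byte standard header followed by records. It builds the raw byte stream directly rather than calling the GDI APIs (so it runs anywhere); the function codes are from the WMF format, but treat the details as illustrative.

```python
import struct

def wmf_record(function, params=b""):
    """One WMF record: 32-bit size in 16-bit words, 16-bit GDI function code, parameters."""
    size_words = (4 + 2 + len(params)) // 2
    return struct.pack("<IH", size_words, function) + params

def wmf_file(records):
    """Prepend the 18-byte standard METAHEADER to a record stream."""
    # find the largest record, in words, by walking the stream as a player would
    max_words, off = 0, 0
    while off < len(records):
        (size_words,) = struct.unpack_from("<I", records, off)
        max_words = max(max_words, size_words)
        off += size_words * 2
    total_words = (18 + len(records)) // 2
    # mtType=2 (disk file), mtHeaderSize=9 words, mtVersion=0x0300,
    # mtSize, mtNoObjects=0, mtMaxRecord, mtNoParameters=0
    return struct.pack("<HHHIHIH", 2, 9, 0x0300, total_words, 0, max_words, 0) + records

META_SETMAPMODE = 0x0103
META_EOF = 0x0000
MM_ANISOTROPIC = 8

data = wmf_file(wmf_record(META_SETMAPMODE, struct.pack("<H", MM_ANISOTROPIC))
                + wmf_record(META_EOF))
```

Note there is nothing self-describing in the stream: a player has to trust the declared sizes, which is exactly where the malformed-file crash discussed above comes from.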

This does suggest that any application that renders WMF data using the PlayMetaFile API could be an attack vector. Because it is such a venerable format, many applications support it. You can include WMF drawings in your Word documents. You can process WMF files in Paint Shop Pro.

The current advisory from Microsoft suggests unregistering the shimgvw.dll component. This component is responsible for much more than WMF rendering. It performs thumbnail rendering in Windows Explorer for all the best-known file types. It provides image size and other information for the Task pane and status bar. It also implements the ‘Windows Picture and Fax Viewer’ frame that appears if you click Preview on the context menu for an image. Unregistering this DLL kills all this functionality – but it does not protect against the vulnerability in other applications which call PlayMetaFile (except those which use shimgvw.dll as a proxy, such as Internet Explorer). This is my supposition, anyway – I would be astonished if shimgvw.dll did not render WMF simply by calling PlayMetaFile, and likewise Enhanced Metafiles by calling PlayEnhMetaFile.

While WMF files are most often used in the filesystem for storing vector-based clip-art (one among many other formats), you can also find them used within other formats, because of the native OS support. For example, when copying a diagram from Visio to Word, you will find that the prerendered version of the diagram (used for a linked or embedded diagram when the diagram is not active) is a metafile – although in this case it is most likely an Enhanced Metafile. Whether the Enhanced Metafile format can also be exploited is unknown.

How to stop TlbImp requiring admin privileges

It seems that the Type Library Importer tool, TlbImp.exe, sometimes needs administrative privileges to do its job if you’re trying to create a Primary Interop Assembly (or any other strong-named assembly). At work today, even admin privileges weren’t enough – I’m not sure whether something broke after installing Visual Studio 2005, since it used to work fine. The error given is ‘Invalid strong name parameters specified.’

What seems to be required is that the strong-name key be added to a key container before it can be used, which TlbImp does for you behind the scenes. The container used by the strong-name functions can be either a user or a machine container; the default seems to be a machine container (at least, that’s been the case both here at home and at work).

You can switch to a user container using the following command from a Visual Studio command prompt (either from the Start Menu’s Visual Studio group, or by running vsvars32.bat from the Common7\Tools directory, or sdkvars.bat from the .NET Framework SDK Bin directory):

sn -m n

Having done this, you should no longer need administrative privileges to strong-name an assembly. (To switch back to machine containers, run sn -m y; running sn -m with no argument displays the current setting.)