Wednesday 24 March 2004

Pot, meet kettle

Real's Glaser exhorts Apple to open iPod (via Paul Thurrott)

Yep, the CEO of RealNetworks is asking Apple to open up their product. I still want Real to open up the RealAudio and RealVideo formats so that I can use Windows Media Player - or any other player - to view them without having to install Real's crappy player.

Turn it around, Rob; are you going to open your player or your store? Didn't think so.

Saturday 20 March 2004

I take it back, again

RealDownload stopped downloading after 100MB. I think there's something up with either the download server (farm) or the download itself.

Friday 19 March 2004

I take it back

RealNetworks did produce one useful piece of software: RealDownload.

OK, the user interface is non-standard and a bit crappy (at the time, Real insisted on a massive icon in the top-left corner, against the Windows standard, which forces the application to draw all of the normal window 'chrome' itself). But, unlike most Real software, it works. It does show pop-up ads whenever a new download starts, but you get used to closing them pretty quickly (and anyway, I normally use it for unattended downloads).

I was joking earlier this week that the only good thing ever to come out of RealNetworks was Andrei Alexandrescu.

I've not used RealDownload for quite a while, actually, but I pulled it out after the Windows XP SP2 Preview download failed at 131MB twice. Since I moved to broadband, most downloads have been very reliable using just IE 6.0's built-in download tool.

Why's my pagefile so big?

Larry Osterman answers the question, why do I need such a large page file?

An aside to this is: why does Windows seem to be constantly swapping out? The answer lies in the working set. Windows tries to keep a certain amount of physical memory free so that memory allocations can succeed - in other words, it uses a bit of otherwise-idle time to trim memory that hasn't been used recently, so that programs don't have to wait as much when they need more. It also tries to share the available memory out fairly. The downside is that, if your process's memory usage profile is poor, the system trims off a piece of your process's memory that you haven't used in a while - and then you immediately reference it. This can happen if you have a large, disorganised data structure that you scan in an odd order, or if related parts of your program code are a long way apart.

I thought I was being bitten by this in our server application recently, and tried using SetProcessWorkingSetSize to give us a larger working set. However, it had no effect. I surmise that what was actually happening was that the Windows message queue (this server uses the Winsock control for incoming data) was growing and growing because the server wasn't keeping up. That seemed to cause a lot of swapping, which made the server slower, which caused still more swapping. It wasn't a memory leak as such, because the process's working set dropped right back down as soon as the load was removed.
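
For reference, this is roughly what that call looks like from C# via P/Invoke (our server isn't written in C#, and the sizes below are purely illustrative):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class WorkingSetTweak
{
    // SetProcessWorkingSetSize lives in kernel32.dll. SIZE_T is pointer-sized,
    // so declare the sizes as IntPtr; passing (IntPtr)(-1) for both asks the
    // system to trim the working set instead.
    [DllImport( "kernel32.dll", SetLastError = true )]
    static extern bool SetProcessWorkingSetSize( IntPtr hProcess,
        IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize );

    static void Main()
    {
        // Ask for a 4MB minimum / 16MB maximum working set for this process.
        IntPtr hProcess = Process.GetCurrentProcess().Handle;
        if ( !SetProcessWorkingSetSize( hProcess,
                  (IntPtr) ( 4 * 1024 * 1024 ), (IntPtr) ( 16 * 1024 * 1024 ) ) )
        {
            Console.Error.WriteLine( "SetProcessWorkingSetSize failed: " +
                Marshal.GetLastWin32Error() );
        }
    }
}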

Tuesday 16 March 2004

.NET CF whinging

My major whinge is just that, where the overload count has been reduced, it's the most configurable overloads that have been dropped. Yes, this saves metadata in the runtime - but it hurts the programmer (in some cases you have to avoid the framework entirely because you're boxed in). The original designers of Windows CE got it right: eliminate the simple functions (such as MoveTo and LineTo) which are entirely covered by more complex APIs (such as Polyline). In .NET CF, you can't even create a pen wider than one pixel, because the overload that takes a pen width has been eliminated.
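
To illustrate - two versions of the same OnPaint override, inside a Control subclass with System.Drawing and System.Windows.Forms referenced. The widths and coordinates are made up, and filling a rectangle is just one workaround for the axis-aligned case:

// Desktop Framework: the width comes straight from the Pen constructor.
protected override void OnPaint( PaintEventArgs e )
{
    using ( Pen thick = new Pen( Color.Black, 3f ) )
    {
        e.Graphics.DrawLine( thick, 0, 10, 100, 10 );
    }
}

// .NET CF 1.0: only new Pen( Color ) exists, so for a horizontal or vertical
// line you can fake the width by filling a thin rectangle instead.
protected override void OnPaint( PaintEventArgs e )
{
    using ( SolidBrush black = new SolidBrush( Color.Black ) )
    {
        e.Graphics.FillRectangle( black, 0, 9, 100, 3 );  // a 3-pixel 'line'
    }
}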

Other points of contention: there's no System.Diagnostics.Process class, so you can't create a process - you have to P/Invoke CreateProcess. You can't wait on more than one handle at a time with WaitHandle.WaitAny or WaitAll, because they've been removed (despite the underlying platform supporting WaitForMultipleObjects, at least for the WaitAny case). And you can't poll a wait handle, because the only overload of WaitOne left is the one that takes no parameters.
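
For what it's worth, the CreateProcess declaration you end up writing looks something like this. It's a sketch only - BOOLs are declared as int to keep the marshalling obvious, and you should check the exact signature and creation flags against the SDK headers for your platform:

using System;
using System.Runtime.InteropServices;

class NativeProcess
{
    [StructLayout( LayoutKind.Sequential )]
    struct PROCESS_INFORMATION
    {
        public IntPtr hProcess;
        public IntPtr hThread;
        public int    dwProcessId;
        public int    dwThreadId;
    }

    // On Windows CE the Win32 APIs live in coredll.dll; most of the
    // CreateProcess parameters must be NULL/zero on CE anyway.
    [DllImport( "coredll.dll", SetLastError = true )]
    static extern int CreateProcess( string pszImageName, string pszCmdLine,
        IntPtr psaProcess, IntPtr psaThread, int fInheritHandles,
        uint fdwCreate, IntPtr pvEnvironment, IntPtr pszCurDir,
        IntPtr psiStartInfo, out PROCESS_INFORMATION pi );

    [DllImport( "coredll.dll", SetLastError = true )]
    static extern int CloseHandle( IntPtr hObject );

    // Launch an executable and close the returned handles straight away.
    public static bool Launch( string exePath )
    {
        PROCESS_INFORMATION pi;
        if ( CreateProcess( exePath, null, IntPtr.Zero, IntPtr.Zero, 0, 0,
                  IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, out pi ) == 0 )
        {
            return false;
        }

        CloseHandle( pi.hThread );
        CloseHandle( pi.hProcess );
        return true;
    }
}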

More seriously, it's not CLI compliant: the Thread class has no Abort or Join methods.

To get back from a background thread to the UI thread in order to update UI state (which you must do, otherwise you may deadlock or hit other synchronisation problems), you use Control.Invoke. .NET CF eliminates the overload which can take parameters: you're stuck with the EventHandler delegate, which gives you the current object and an empty EventArgs. The desktop pattern looks something like:

private void ctl_HandleEvent( object sender, CustomEventArgs e )
{
    // If we're not on the UI thread, re-invoke this handler on the UI
    // thread and return - otherwise we'd handle the event twice.
    if ( this.InvokeRequired )
    {
        this.Invoke(
            new CustomEventHandler( ctl_HandleEvent ),
            new object[] { sender, e }
            );
        return;
    }

    // Do normal handling
}

You can't do this in .NET CF because you don't have InvokeRequired or the two-argument variant of Invoke. You have to cache whatever's in the CustomEventArgs somewhere, then Invoke a different method.
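
What you end up with looks something like this (the field and method names here are mine, and you'd need some locking or a queue if events can overlap):

private CustomEventArgs pendingArgs;

// Called on the background thread: stash the data, then marshal a
// parameterless call over to the UI thread.
private void ctl_HandleEvent( object sender, CustomEventArgs e )
{
    pendingArgs = e;
    this.Invoke( new EventHandler( HandleEventOnUiThread ) );
}

// Runs on the UI thread with an empty EventArgs; pick the real data back up.
private void HandleEventOnUiThread( object sender, EventArgs e )
{
    CustomEventArgs args = pendingArgs;

    // Do normal handling with args
}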

So the net result of omitting the metadata for these overloads (which subsume the versions with fewer arguments) is that far more metadata ends up in your program as you code around the omissions.

And don't get me started on marshalling...

Mind your P/Invoke

It's important to get your P/Invoke [DllImport] declarations right in .NET. Josh Williams points out a specific problem on AMD64/x64.

I've been doing a huge amount of P/Invoke recently as we port our existing codebases to be usable from C#. We're mostly going pure-managed because we prefer a few large binaries to many small ones (which is less of a hit on the loader). However, since we're dealing with the .NET Compact Framework, there are many, many places where the framework just doesn't have the features.
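
I don't know exactly which case Josh covers, but the classic mistake that bites hardest on 64-bit is declaring pointer-sized things - handles, pointers, LPARAM and friends - as int. For example:

// Wrong: HANDLE is pointer-sized, so 'int' silently truncates it on x64.
[DllImport( "kernel32.dll" )]
static extern bool CloseHandle( int hObject );

// Right: IntPtr is always the size of a native pointer.
[DllImport( "kernel32.dll", SetLastError = true )]
static extern bool CloseHandle( IntPtr hObject );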

More on Visual Studio slip

Dan Fernandez posts more information on why Visual Studio "Whidbey" has slipped.

Friday 12 March 2004

Know your command prompt

Ian Griffiths points to a post by Junfeng Zheng about the NT command prompt, CMD.EXE.

Ian mentions that file and directory name completion are available in the NT command prompt. He doesn't mention that they're not fully enabled by default (or at least, filename completion is enabled, but on an obscure keystroke - Ctrl+D?). To get completion enabled properly, use TweakUI. I've set both completion keystrokes to Tab, which is what I was used to from bash.
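
As far as I can tell, TweakUI is just a friendly front end for a couple of per-user registry values; setting them by hand looks something like this in C# (9 being the key code for Tab):

using Microsoft.Win32;

class EnableCompletion
{
    static void Main()
    {
        RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Command Processor" );
        key.SetValue( "CompletionChar", 9 );      // file name completion
        key.SetValue( "PathCompletionChar", 9 );  // directory name completion
        key.Close();
    }
}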

Thursday 11 March 2004

Site maintenance

Quick maintenance update: a link to my blogroll is now over on the right on the website. For those of you reading in ATOM (or a translator, like Arcterex), it's here.

eVC 4.0 SP3

Via Pocket PC Developer Network.

Embedded Visual C++ 4.0 Service Pack 3 is released.

Heh. I wondered why Symbol's documentation for the MC9000-G (custom CE platform version) told you not to install eVC 4.0 SP2. Sorry, guys, no choice - I was already developing for CE.NET 4.2-based devices, including Pocket PC Windows Mobile 2003 on your PPT 8800.

Old news?

After yesterday's entry, I was browsing Robert Scoble's blogroll (OK, OK, I was looking to see if I was in it - apparently not) and discovered a link to Benjamin Mitchell's blog about an ASP.NET presentation at Microsoft UK, where Scott Guthrie stated a Q1 2005 release date for Whidbey. This was posted over a month ago (10th February).

It'd be nice to be kept up to date, really </sarcasm>.

Wednesday 10 March 2004

Oops, we slipped

Microsoft Watch is reporting that Yukon (the next release of SQL Server) and Whidbey (the next release of Visual Studio) have slipped to 2005.

Damn, I was looking forward to programming devices in C++ using Visual Studio. Embedded Visual C++ is a cut-down and modified version of Visual C++ 6.0 - and not done very well, I might add; it's very crash-prone, particularly if used aggressively, and I have a very short code-compile-test-debug cycle. Sometimes I could do with thinking for longer, but if I leave it too long before compiling I end up going down blind alleys; compiling and testing gives me confidence.

I was also looking forward to an improved .NET Compact Framework (I'll post my frustrations with writing a pretty simple control soon) and a more complete .NET Framework - also, a 64-bit Framework.

I wonder which bit slipped? Probably the CLR, if the list of job postings an MS blogger put up recently (which I can't now find!) is anything to go by.

It'll also get me off Josh Heitzman's back ;)

Saturday 6 March 2004

Conflict of Interest

Mike Sax: Patents & Offshoring: Did you know that the USPTO has to be completely self-funded (it can't rely on your tax dollars), but that a percentage of patent application fees is diverted to other, unrelated agencies by the US Congress? And did you know that the US Congress, not the USPTO, determines what the application fees are?

Surely this is a massive conflict of interest: the patent office is paid by the people seeking patents. If the patent office want more income, they have to get it by processing more patent applications in less time.

Thursday 4 March 2004

Stupid conspiracy theories of our time

The Inquirer: If Longhorn runs on Power PC, what need for Intel?

OK, assuming that Microsoft isn't deliberately putting FUD in the channel surrounding Xbox 2 (my original theory was that MS were trying to deceive Sony, but the information is beginning to look a bit too solid for that), what will this mean?

Microsoft won't buy Apple to get access to PowerPC-based hardware. Their customers' investment in x86-compatible hardware and software is too great. The PowerPC G5 only just matches the performance of Intel's top-of-the-range chips, and we're about to see another big step in clock speed with the Prescott chips. Nevertheless, the G5s can probably emulate an x86 quickly enough to run original Xbox games (after all, the Xbox only has a 733MHz PIII-class processor). I expect Intel were too expensive and unwilling to reduce the massive power and cooling requirements of the P4 series - and one of Microsoft's goals for Xbox 2 is to reduce the physical size, weight and noise of the console, factors which caused problems selling the original into the Japanese market. (Actually, an Xbox isn't much larger than a PS2 - it just looks bigger because the PS2 has a rather deceptive case design, with the bottom half of the front panel recessed.)

Windows has run on PowerPC processors before. NT 3.51 and NT 4.0 CDs shipped with support for four processor families: x86, Alpha, MIPS and PowerPC. The PowerPC HAL, however, was designed for the Common Hardware Reference Platform - which never took off; the Power Macs aren't CHRP-compliant. Windows 2000 was to have dropped this to two, x86 and Alpha, but Compaq, having bought DEC, decided they would no longer promote or support Windows on Alpha. (This wasn't the end of the story: much of 64-bit Windows was first developed on 64-bit Alpha chips).

I don't expect Microsoft to release a new general port of Windows to Apple hardware. The market simply isn't there - you'd have to persuade an installed base of Apple owners (since Microsoft will never be able to get Windows pre-installed on Macs) that they would prefer to use a system with even less software available than their own. OK, Longhorn's WinFX API will largely be accessed through the .NET Framework, which performs JIT compilation from the Common Intermediate Language (CIL) stored in the binaries to an execution stream suitable for the host processor - but there's a whole host of legacy applications which won't run. Longhorn isn't intended to be all-or-nothing in this way.

Indeed, it looks like the other attempt to move the PC market to a more modern architecture - Itanium - could fall on the sword of poor x86 compatibility. An Itanium running x86 code through its hardware emulation performs something like a 1.5GHz 386 - not very well relative to modern machines - because the emulation doesn't do any out-of-order execution or branch prediction. Software emulation (such as the IA-32 Execution Layer) could improve matters - benchmarks indicate it can get close to the performance of an x86 processor at a similar clock speed. Unfortunately, clock speeds for x86 processors are already more than twice that of the fastest Itanium 2 - 3.4GHz for the newest P4Es compared with 1.5GHz for the Itanium 2.

On native code, the Itanium often blitzes a P4 Xeon running at double its clock speed, largely because the instruction set is more expressive and the architecture reduces the need to hit main memory. A modern x86 processor has many more registers internally than are visible through the instruction set, but it can't easily tell when a write to memory exists only because the program ran out of registers - so it has to take the whole hit of writing to and reading from RAM just in case the program depends on that state. Yes, reads and writes are cached - but going through the cache still causes a bit of a stall.

Wednesday 3 March 2004

The SELECT/UPDATE problem, or, why UPDLOCK?

Ian's been having some deadlock trouble with SQL Server at work. I tried, but failed, to explain that two server processes running the same stored procedure could deadlock.

The problem comes when you need to update some rows in a table, but only when certain other data in each row is set. You can often do this simply by using the WHERE clause in the UPDATE statement, but if you need to set different values depending on the current values, or you need to update multiple tables simultaneously, it becomes more complicated. So we use a SELECT to get the current values and an UPDATE to write the new values, if we choose to.

The first thing to do is to ensure that we only write back if the data hasn't been changed. In SQL, each statement is atomic - either all of its effects are applied, or none are. However, here we need two statements, so we wrap them up in a transaction:

BEGIN TRAN
SELECT
   @value = Col1
FROM Tbl
WHERE
   RowID = @rowID

UPDATE Tbl
SET Col1 = @newValue
WHERE RowID = @rowID

-- Note, should check @@ERROR and ROLLBACK TRAN
-- if the update failed
COMMIT TRAN

Looks fine, right? Not always. Now I need to explain how SQL Server locks work.

Like all concurrent systems, SQL Server typically has more clients than resources to serve them, so it has to give an illusion of simultaneous operation. The really, really hard way to let transactions operate simultaneously is to allocate new resources for every possibly-contending operation and reconcile them at the end of each atomic operation. The easy way is to lock the object, preventing contending operations, then release the lock at the end of the atomic operation. Locking reduces concurrency, but encourages correctness in a simple fashion.

SQL Server uses locking for concurrency. This is fine so long as locks aren't held for a long period of time. Reading a row takes a shared lock, held until the end of the atomic operation (strictly, shared locks are only held that long at the REPEATABLE READ and SERIALIZABLE isolation levels or with a HOLDLOCK hint; at the default READ COMMITTED level they're released as each statement completes); updating a row takes an exclusive lock. If a shared lock is held, any other process can take a shared lock, but a process wanting an exclusive lock must wait. If an exclusive lock is held, all other processes wanting a lock must wait.

With our query above, the SELECT takes a shared lock and holds it; the UPDATE then tries to convert that shared lock to an exclusive lock.

Now, what happens if we run this query on another connection? Let's say we have queries Q1 and Q2, and to simplify things, let's assume that the server has a single processor. If the scheduler decides to run Q1, and is then interrupted to execute Q2, the following could happen: the SELECT from Q1 runs and takes a shared lock. Then, the SELECT from Q2 runs and takes another shared lock. Now Q2 is interrupted and the scheduler runs Q1 again, which tries to take an exclusive lock, which is blocked by Q2's shared lock. Q1 blocks so the scheduler runs Q2, which tries to take an exclusive lock to do an UPDATE, but is blocked by Q1's shared lock. Result: deadlock - neither Q1 nor Q2 can progress because they're both waiting for the other to finish.

You could give SQL Server a lock hint to take an exclusive lock instead of a shared lock when executing the SELECT, by specifying (XLOCK) after the table name. This stops the deadlock, because both queries now try to acquire the exclusive lock up front, so one simply waits for the other. It has the nasty side-effect of preventing anyone who just wants to read the data from doing so until our transaction completes.

For this reason, SQL Server has another lock type: an update lock. The rules for this lock are simple. If no lock is held, or only shared locks are held, the update lock can be taken. Only one process can have an update lock, but other processes can take shared locks while an update lock is held. If the process holding the update lock wants to write, it is upgraded to an exclusive lock. So if we add the update lock hint (UPDLOCK) to our SELECT, Q1 and Q2 will now perform atomically, one after another, without deadlocking, while other processes can read the selected rows (at least, until we UPDATE).

BEGIN TRAN
SELECT
   @value = Col1
FROM Tbl (UPDLOCK)
WHERE
   RowID = @rowID

UPDATE Tbl
SET Col1 = @newValue
WHERE RowID = @rowID

-- Note, should check @@ERROR and ROLLBACK TRAN
-- if the update failed
COMMIT TRAN

Monday 1 March 2004

Why aren't people studying Computer Science any more?

Scoble: Why aren't students going into computer science?

Well, in my time at Aston the number of CS students was rising. However, when I graduated in June 2001 we'd only just started to see the beginning of the dot-bomb, and the terrorist attacks on New York were still a few months away. Since then, the economic conditions for software developers have become a lot worse.

When I was looking for jobs in July and August 2001, there were literally hundreds of vacancies, covering many pages, advertised in every computing journal and national newspaper. That went down to about half a page last year, and has recovered to about a page. Searching on sites like Monster.com (and so on - don't take that as an advert or a recommendation) would also turn up hundreds of vacancies; now it gets you about five.

I think what's happened is that the people that Ian and I termed the 'mercenaries' have started looking elsewhere. For a while, it looked like you could make a lot of money out of software; now it looks like you can make a living. I don't think you should blame this on Microsoft - their market share is not much greater than it was four years ago (does an extra 1 - 2% mean all that much when it's more than 90% already?) The mercenaries weren't doing it because they loved the challenge of working with software; they were doing it for the money. These tended to be the people who complained that the coursework was too hard.

Well, guess what: software is hard. A lot of people think they can translate a bit of hacking at simple programs into strong, reliable, easy-to-use software. You can't. As soon as you need to handle errors rather than ignoring them, and to deal with asynchronous and simultaneous operations, you need to think about how your program works. You can't always just experiment and find out, because testing does not prove that your program is correct - it only proves that no errors occurred during the most recent run of your tests, which may not be sufficiently complete (you can only look for bugs that you think might be there). Testing is one of the tools we have, but the best tool is between your ears. A lot of developers never understand this.

The OSS movement try to suggest that software is easy and anyone can hack on it. Not true at all. The whole tone of The Cathedral And The Bazaar tries to suggest that professional software developers are developing a priesthood to restrict the Average Joe from getting involved in programming. I don't think we are; I think professional developers have seen through the superficial simplicity of programming to the murky depths of complexity lurking below.

A CS degree can help educate developers about the need to understand your program, and provide the skills to write software. (It can also indoctrinate people in the One True Way to develop software, which is not a good thing). All told, I'd rather see people with a CS degree developing software than people without; you do get excellent self-taught programmers, but you get a lot of poor ones too.

16-bit Apps

Raymond Chen: Why 16-bit DOS and Windows are still with us

As a development organisation, we have a number of DOS apps that are essential to us. We still deal with a lot of DOS-based hand-held terminal hardware (e.g. the Symbol PDT68xx, 61xx or other 3000-series) - indeed, I think we wrote three or four entirely new DOS-based applications last year. Where possible, though, we try to use our application server software and write the actual application with a desktop development tool (VB6 or, recently, a .NET language). This is only possible in a wireless LAN environment, though - while it works over a wireless WAN, this is obviously quite costly for a thin-client environment.

Until the appropriate vendors come up with Win32-based toolsets for these devices, or they die out completely, we need DOS compatibility. Our main compiler for these platforms is still Visual C++ 1.52 (the second most common is Microsoft C 6.0!) However, we also develop for Windows CE and require eVC and Visual Studio .NET 2003. So I have compilers on my work system that are more than ten years apart (IIRC).

Less important Win16 programs include B-Coder Professional. We're still using version 3.0 because, well, it works, and 4.0 offers only a few extra symbologies for a large outlay of funds. The configuration tool for a D-Link network printer adapter is also a 16-bit app (it has a web configuration tool, but that's not very helpful when you don't know the device's IP address).

However, I'm contemplating moving the development environment into a Virtual PC VM. After all, I don't use the IDE for developing DOS applications, except to maintain the makefile and perform the build. Any coding is usually done in TextPad, if it's not a cross-platform project such as the application server's thin client (where it's normally done in Visual C++ 6.0 or in eVC 3.0).

For the most part, though, Windows CE-based devices now cost less to purchase than the DOS devices, and are getting closer in functionality, design, and battery life. The new MC9000-G looks to be on a par with the old PDT68xx in ergonomic terms.