Monday, 30 August 2004

Lucky/unlucky numbers

The whole business with lucky or unlucky numbers is ridiculous. Today's post on This Is Broken shows a lift with the floor request buttons in right-to-left, bottom-to-top order, but a lot of the comments noted that there's no button 13. This is not because you can't get to floor 13; it's that there is no floor 13: 14 is directly above 12.

Coincidentally, this weekend Michael Schumacher won his seventh Formula 1 world championship title on the occasion of Ferrari's 700th race, even though he only came second in the race itself. This was the 14th race of the season; Schumacher won the 13th race in Hungary two weeks ago, on a track where the previous year he'd been lapped before the end.

Car number 7 hasn't been too lucky for Jarno Trulli this year (if anyone had a lucky number, you'd have to say it was Schumacher's number 1, except that he's earned it) although he did qualify first and go on to win at Monaco. Car number 13 hasn't been unlucky for anybody; just like the architects, F1 doesn't issue number 13. By rights it should have gone to Mark Webber, but he's really done as well as could be expected with the horribly unreliable and underperforming Jaguar. Of course, Jaguar got cars numbered 14 and 15 (rather than 13 and 14) - because they finished 7th last year!

Sunday, 29 August 2004

Contact Me

I've been trying to think of a way to get a general 'contact me' link up that doesn't put a public email address on the web - I get enough spam already, thanks. While I have a triple-pronged approach (Cloudmark SpamNet for known-spam, SpamBayes for heuristic scanning, Brightmail at my [mail] ISP) I still have to download the stuff that gets through Brightmail - about 16 items per day at present, about 25% of which are actually non-delivery reports where someone's used a fake sender address with my domain in it.

Anyway, enough digression. If you want to just send me stuff, please reply to this post. I'll try to delete your comment after I've dealt with it, if you want me to. I'll put a link to this post in the sidebar. If you don't want to register, please just use the anonymous comment facility; if you do leave an anonymous comment, please leave your name!

Saturday, 28 August 2004

Time for a new portable music player?

Back in 1999 I bought a Sony MZ-R55 MiniDisc recorder (in yellow), in part to replace the increasingly-unreliable cassette players, for the princely sum of £250. I'll admit there was an element of comfort purchase about it - women buy clothes, geeky men buy techie toys...

Now it's over five years old and on the whole it's doing OK. It's got a slight dent in the top cover where I must have dropped it (or dropped something on it) pretty hard, and the battery cover has lost a lot of its metallic paint - the rest of the shell is anodised aluminium. It's on its second rechargeable 'chewing gum' battery (about the size and shape of a pack of chewing gum) and its second set of earphones: the first set I'd used previously with a number of cassette players, and they lasted about eight years (!); the second I bought last year, after the junction between the first set's main stereo cable and the cable to the right earphone finally failed. The unit shipped with crap earphones, as always. The bundled pair have a short cord plus a long extension, making them suitable for use with the remote, but I always use the extension and never the remote. Basically I don't like showing off that I have it - all anyone can see is the cable going into my pocket.

However, it's started to report DISC ERROR periodically, particularly with discs I've only recently recorded. I did get a cleaning disc, which may have helped a little. I still have to punch in track names manually, although my newer stereo, bought a couple of years ago, outputs track marks on the optical cable, so I no longer have to mark the tracks myself as I did when I first bought the recorder. And the battery life is pretty short: the original spec says four hours, but it seems less - NiMH chemistry deteriorates over time. Creating a new MD from source material means recording in real time. And the player only supports the original bitrate (now named 'SP'), or doubling the recording time... by recording in mono.

So I'm contemplating a new player. I can think of a few choices:

  • A new NetMD player
  • A hard-disk-based player
  • A RAM-based digital music player
  • Using my Dell Axim x30 with 128MB+ SD cards

Let's look at these options. The RAM-based player and the Axim are much the same, except that the Axim is obviously not a fixed-function device. I've tried this option, but it's a bit noisy and leaves gaps between tracks. You also can't get a whole album at a decent bit-rate into the RAM: 192kbps is roughly 24KB per second, so 80 minutes (4,800 seconds) of music takes up about 113MB. SD cards at 128MB cost £16 from dabs, and I'm not paying as much for the blank media as I did for the music - or more. OK, I could put multiple originals on one card - I'm not sure if the Axim would support a single 1GB card, at £84 - but at eight or so albums per card that's still about £11 per album (depending of course on the album!)

RAM-based players either have the same problem as above, or have only internal RAM, making it time-consuming and irritating to change what's loaded. At the moment, if I want to play something else, I can carry a number of discs around with me and simply swap them. With an internal-memory player, if you can't fit your whole collection on it, you can only change what's on it by synchronising with a PC. That's not for me. With the portable MD recorder I can hook up to an optical digital or an analogue source (if I remembered the cables...) to capture new music.

OK, that's two options discarded. What about hard-disk players?

They're bigger than my old R55. Sony have managed to squeeze their new MZ-NH1 down even smaller than that, which is pretty impressive considering they're dramatically limited by the size of the disc. An iRiver H120, like the one Ian Griffiths recently bought, weighs nearly twice as much as the NH1 (although both weigh considerably less than the '55). However, the H120 costs £215 from Amazon, whereas the NH1 costs £225. That's pretty comparable, considering that the NH1's capacity is essentially limitless - you just buy more discs. Standard MDs hold 80 minutes with the old 292kbps ATRAC encoding, but a Hi-MD player can squeeze 305MB out of a standard disc, allowing (allegedly) 140 minutes with the new Hi-SP codec.

To be honest, though, I don't want to cram my whole music collection onto one disc or player. I don't use shuffle much - I like to hear albums as the artist intended. That may not be an issue in the pop world, but I'm a rock fan - particularly progressive, alternative rock, and indie - where the album as a whole tells a story. Organisation is a key feature for me. At only a little over £1 per disc for an 80-minute MD (a little under if you buy a 50-pack!), expandability is fine. I can't find out how much the new Hi-MD discs cost, which is a bit of a warning light.

Another advantage of MiniDisc, of course, is direct interchangeability of discs with other players/recorders. My car currently has a cassette player, but I've thought about changing it for something else. Pushing your high-quality digital music through a cassette adapter or an FM transmitter is a bit on the stoopid side, IMO.

Battery life on the newest MiniDisc players seems, incredibly, a little better than on the HDD players.

Looks like it's going to be MiniDisc again. I might be able to pick up an MZ-N10 from eBay, since a few retailers still seem to have a small stock (and there are 'buy it now' prices at £120). Otherwise I think it'll be an MZ-NH1.

[Edit: rashin' fruffin' mashin' (speak like Muttley) double-encoding bug, note cool new WYSIWYG editor on blogger.com]

Svchost is spyware???

When writing that last post, I searched for svchost (before deciding to link to Larry's comment), and got this Sponsored Link:

Svchost.exe is Spyware

I'd be pretty dubious about using any piece of anti-spyware software that flagged svchost.exe itself as being spyware - it's just a shell. However, various keyloggers and other trojans might be installed into that shell. As always, they can't do this unless you're running as an administrator (or you've changed the ACLs on the keys).

You do need to use your head when reading anti-spyware reports. I ran Ad-Aware a couple of days ago, and all it reported was my use of about:blank as my start page (Lavasoft: about: is a genuine protocol) and a bunch of innocuous cookies.

Edit: After a number of comments about different permutations of the name and mass-mailer worms using the actual name but in a different directory, I feel I should point out that viruses (true file-infecting viruses) can infect any binary. What I was trying to point to in this post is that the advert baldly says 'Svchost.exe is Spyware'. That goes too far.

Run Elevated - part 1

[Let's see if I finish this series...]

Last week I wrote about running with reduced privileges. In that post I talked about using the LsaLogonUser API to create a new token, adding additional SIDs, and writing a service that would do this. In this post, I'm going to talk about some design aspects.

Let's summarize the requirements:

  • We want an API for creating a process with additional SIDs;
  • The user should, if permitted, be able to perform this operation by supplying at most a single password - their own;
  • Administrators will be able to allow use of this service in two ways:
    • Define, somehow, a set of users who can always use this service - perhaps different sets for raising to Administrators and for raising to Power Users;
    • Allow anyone who can also supply override credentials - typically this will be any member of the Administrators group, but we should make it configurable;
  • This API will use some form of remote call into a service due to the privilege requirements of LsaLogonUser;
  • The created token must have the ability to interact with the caller's login session;
  • The created process must appear on the appropriate user's desktop (incl. Terminal Services sessions).

Let's look at how we do this. For authorization, we should leverage the existing security APIs as far as possible - this lets us use the standard security dialogs which administrators are already familiar with. Therefore, we'll use the AccessCheck API against ACLs that we store somewhere, to determine whether the user is authorised, or whether the override credentials they present (option 2 above) are authorised to override.

For consistency, even though we'll have the user's alleged password, we'll have the service impersonate its client to do the access checks. We'll therefore be checking against their existing token - any group membership changes made since they logged on won't be reflected.
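Assuming the remote call does end up being RPC, the impersonation dance is short. This is just a sketch, with error handling trimmed, but all the APIs are standard:

#include <windows.h>
#include <rpc.h>

// Sketch: get an impersonation token for the RPC caller, suitable for
// passing to AccessCheck. Error handling trimmed for brevity.
HANDLE GetCallerToken()
{
    if (RpcImpersonateClient(NULL) != RPC_S_OK)
        return NULL;

    HANDLE hToken = NULL;
    // OpenAsSelf = TRUE: perform the open using the service's own
    // identity, not the (less-privileged) caller we're impersonating.
    OpenThreadToken(GetCurrentThread(), TOKEN_QUERY | TOKEN_DUPLICATE,
                    TRUE, &hToken);
    RpcRevertToSelf();
    return hToken;   // an impersonation token, ready for AccessCheck
}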

The right place to store the ACLs is probably somewhere in the registry; I'm going to store them in the service's Parameters key. Let's choose a name for the service: I'll call it RunElSvc. This makes our Parameters key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RunElSvc\Parameters. We'll put an ACL on this key itself so that Administrators and LocalSystem have Full Control and no one else has any access. Note that this is exactly what the SCM does - the Security key under a service contains a security descriptor (in binary format), and the key itself has an ACL as above.
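Reading the descriptor back out is bog-standard registry code. A sketch - the value name RaiseSD is just something I've made up for illustration, and I'm assuming the SD is stored in self-relative (binary) format like the SCM's Security value:

#include <windows.h>

PSECURITY_DESCRIPTOR ReadStoredSD()
{
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Services\\RunElSvc\\Parameters",
            0, KEY_QUERY_VALUE, &hKey) != ERROR_SUCCESS)
        return NULL;

    // First call gets the size, second gets the data.
    DWORD cb = 0, type = 0;
    PSECURITY_DESCRIPTOR pSD = NULL;
    if (RegQueryValueExW(hKey, L"RaiseSD", NULL, &type, NULL, &cb)
            == ERROR_SUCCESS && type == REG_BINARY)
    {
        pSD = (PSECURITY_DESCRIPTOR)LocalAlloc(LPTR, cb);
        if (pSD && RegQueryValueExW(hKey, L"RaiseSD", NULL, &type,
                                    (LPBYTE)pSD, &cb) != ERROR_SUCCESS)
        {
            LocalFree(pSD);
            pSD = NULL;
        }
    }
    RegCloseKey(hKey);
    return pSD;   // self-relative SD, usable directly with AccessCheck
}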

I'm not sure whether it will be possible for a domain administrator to control this through Group Policy - something to look into later!

We'll also need to define the security mask bits for the ACL - the specific bits and the mapping of the generic bits. Ironically, the standard bits (DELETE, READ_CONTROL, WRITE_DAC, WRITE_OWNER and SYNCHRONIZE) don't mean a lot for this 'object'. So I think we just define bit 0 to mean 'can' if set and 'can't' if not, and map GENERIC_EXECUTE and GENERIC_ALL onto bit 0. Actually, we could use a single security descriptor with multiple specific bits (one each for 'can raise to Administrators', 'can raise to Power Users', and 'can override the other bits') rather than multiple SDs.
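Putting the last few paragraphs together, the check itself would look something like this sketch. RUNEL_RAISE_ADMIN is my made-up name for bit 0 - nothing here is a real constant from any SDK header:

#include <windows.h>

const DWORD RUNEL_RAISE_ADMIN = 0x0001;   // specific bit 0: 'can'

bool IsCallerAuthorised(PSECURITY_DESCRIPTOR pSD, HANDLE hImpToken)
{
    // Map GENERIC_EXECUTE and GENERIC_ALL onto our single specific bit,
    // as described above. Fields: read, write, execute, all.
    GENERIC_MAPPING gm = { 0, 0, RUNEL_RAISE_ADMIN, RUNEL_RAISE_ADMIN };

    DWORD desired = RUNEL_RAISE_ADMIN;
    MapGenericMask(&desired, &gm);

    // AccessCheck insists on an impersonation token, and wants somewhere
    // to report any privileges used in making the decision.
    PRIVILEGE_SET privs;
    DWORD cbPrivs = sizeof privs;
    DWORD granted = 0;
    BOOL allowed = FALSE;
    if (!AccessCheck(pSD, hImpToken, desired, &gm,
                     &privs, &cbPrivs, &granted, &allowed))
        return false;   // the call itself failed

    return allowed != FALSE;
}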

We want the actual API to seem familiar, so starting from CreateProcessWithLogonW is probably a good idea.

BOOL CreatePrivilegedProcessW(
  LPCWSTR lpPassword,
  LPCWSTR lpOverrideUsername,
  LPCWSTR lpOverrideDomain,
  LPCWSTR lpOverridePassword,
  LPCWSTR lpApplicationName,
  LPWSTR lpCommandLine,
  DWORD dwCreationFlags,
  LPVOID lpEnvironment,
  LPCWSTR lpCurrentDirectory,
  LPSTARTUPINFOW lpStartupInfo,
  LPPROCESS_INFORMATION lpProcessInfo
);

Internally, the service will call CreateProcessAsUser once it's got a token. We'll have to do a bit of copying to make sure the caller's environment, current directory and so on get used, rather than the service's.
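Something like this for the launch step - a sketch assuming we already have the new token, with error handling and the startup-info copying elided. CreateEnvironmentBlock (userenv.dll) builds an environment for the token's user rather than inheriting the service's:

#include <windows.h>
#include <userenv.h>   // CreateEnvironmentBlock; link with userenv.lib

BOOL LaunchWithToken(HANDLE hToken, LPWSTR lpCommandLine)
{
    LPVOID env = NULL;
    if (!CreateEnvironmentBlock(&env, hToken, FALSE))
        return FALSE;

    wchar_t desktop[] = L"winsta0\\default";   // the interactive desktop
    STARTUPINFOW si = { sizeof si };
    si.lpDesktop = desktop;

    PROCESS_INFORMATION pi;
    BOOL ok = CreateProcessAsUserW(hToken, NULL, lpCommandLine,
                                   NULL, NULL, FALSE,
                                   CREATE_UNICODE_ENVIRONMENT,
                                   env, NULL, &si, &pi);
    DestroyEnvironmentBlock(env);
    if (ok)
    {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return ok;
}

For Terminal Services sessions (see the requirements above) the token's session ID would need setting too - SetTokenInformation with TokenSessionId - which I still need to investigate.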

This wouldn't be complete if we didn't think about the possible threats. The major one is probably remote exploitation, which we'll mitigate by using the ncalrpc protocol sequence (local RPC only). We do have to watch for the possibility that some other protocol is used if our service ends up sharing a process, since RPC exposes the union of all protocols registered by all endpoints in a process to every interface in it. My initial plan is for the service to run in its own process, but eventually it might run in a svchost (thanks for the comment, Larry!)
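Registering the server side with only the local protocol sequence would look roughly like this - the endpoint name and the MIDL-generated interface handle are illustrative:

#include <rpc.h>

// From the MIDL-generated header (illustrative name):
extern RPC_IF_HANDLE RunElSvc_v1_0_s_ifspec;

RPC_STATUS RegisterRpcServer()
{
    // ncalrpc only: no network protocol sequences are registered here,
    // so remote machines can't reach the endpoint...
    RPC_STATUS status = RpcServerUseProtseqEpW(
        (RPC_WSTR)L"ncalrpc", RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
        (RPC_WSTR)L"RunElSvc", NULL);
    if (status != RPC_S_OK)
        return status;

    // ...but if we share a process, protocols registered by other
    // endpoints leak in, so the interface should still verify callers.
    status = RpcServerRegisterIf(RunElSvc_v1_0_s_ifspec, NULL, NULL);
    if (status != RPC_S_OK)
        return status;

    return RpcServerListen(1, RPC_C_LISTEN_MAX_CALLS_DEFAULT, FALSE);
}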

I'll have to admit I don't have experience of threat modelling.

Friday, 27 August 2004

Aspects of CS not taught at Uni

Eric Sink has just started a series of articles on Source Control: an introduction for SCM novices. In his introduction, he writes:

"My goal for this series of articles is to help people learn how to do source control.  I work for SourceGear, a developer tools ISV.  We sell an SCM tool called Vault.  Through the experience of selling and supporting this product, I have learned something rather surprising: Nobody is teaching people how to do source control.

"Our universities don't teach people how to do source control, or at least mine didn't.  We graduate with Computer Science degrees.  We know more than we'll ever need to know about discrete math, artificial intelligence and the design of virtual memory systems.  But many of us enter the workforce with no knowledge of how to use any of the basic tools of software development, including bug-tracking, unit testing, code coverage, source control, or even IDEs."

Coincidentally, I also saw Jim Gries' Debugging Glossary post when reading through the blogs.msdn.com feed (actually I read this through the browser rather than either of my aggregators), in which he says:

"[D]uring a recent UI meeting here at work, one of our PM's (Habib Heydarian) mentioned that while at Tech Ed, he was shocked at just how many developers don't have an understanding of the terms used while debugging, let alone how to use a debugger effectively."

In my final year at Aston, the CS department were looking for ways to improve the performance of the Combined Honours students taking the Introduction to Systematic Programming course (no longer on the website; since I wrote my last post on the subject, the site has been updated, and the initial programming course [PDF] is now taught in Java). Formerly, lab sessions were run either by the lecturers or by post-graduate students, many of whom were unfamiliar with Ada, the language taught. That year, the department decided to ask final-year undergraduates who had scored highly in ISP to assist with the lab sessions. I was one of those people. The hours weren't onerous (two or three one-hour sessions per week) and the pay was reasonable at £8 per hour, which helped a little.

By this point I'd bought and read, on Ian's recommendation, Steve Maguire's classic Writing Solid Code, in which he recommends single-stepping through every new piece of code you write using an online interactive debugger. Doing this as a matter of course opened my eyes - desk-checking a program is all very well, but you run into the problem that you think you know what the code does. You miss the bugs because you're not looking for them. There's no substitute for actually running the program and seeing what it does; if you single-step, you find out exactly which code is hit and what your variable values were.

When we were asked for suggestions for the course, I suggested including a session on using the debugger. We were limited, of course, by the available tool suite: GNAT as the compiler and hence GDB as the debugger (actually the gdbtk GUI front-end). However, any debugger is still better than trying to debug through print statements - the edit/compile/run cycle needed to print some variable you missed last time distracts from the session.

This suggestion was radical because many lecturers were of the opinion that online interactive debuggers are useless; after all, you've formally proved your specification, used a CASE tool to generate your code and desk-checked any hand-written code - how can there possibly be bugs after that?

Over here in the real world we know that formal proof works only for really small systems and takes a really long time and therefore costs a lot. We also know that a CASE tool can only really generate programs that the CASE tool's designer intended it to build - novelty is heavily restricted with these tools.

CS course designers: do your students a favour - learn how to use a debugger, then teach them. If you need a resource, or a course textbook, try John Robbins' Debugging Applications for Microsoft .NET and Microsoft Windows. Yes, it's Windows-centric. Yes, it's far more than just an introduction to debugging - it even has some material on optimising your working set toward the end. But the first three chapters (part 1, The Gestalt of Debugging) are a must-read for all developers, on any platform, with any language, using any debugger.

Thursday, 26 August 2004

A useful resource for service information

I've found TheElderGeek.com's guide to services useful.

My definition of svchost.exe: it's a process which supports plug-in service DLLs. You see multiple svchost processes so that a crash in one host process doesn't take down every service, and so that different services can run under different credentials (and hence privileges).

Currently I have six svchosts running, three of which run as LocalSystem, two as Network Service and one as Local Service. One of the LocalSystem hosts is running Terminal Services and DCOM Launcher; another is running HTTP SSL for the HTTP.SYS driver (new in Server 2003 and XP SP2); the third runs most other services. One of the Network Service hosts runs RPCSS, the Remote Procedure Call subsystem while the other runs DNS Client. Finally the Local Service host runs the TCP NetBIOS Helper, the Remote Registry service, the Universal Plug-and-Play listener, and the WebDAV client.
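If you want to see the same mapping on your own machine, tasklist (included with XP Professional) will show which services live in which host:

tasklist /svc /fi "imagename eq svchost.exe"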

'Scuse me, just going to turn off Remote Registry, I don't need that.

Do Scheduled Tasks need the Task Scheduler service?

After the last entry and discovering that the at trick still works, I asked myself this question: do tasks created through Control Panel > Scheduled Tasks need the Task Scheduler service?

Sadly, they do.

However, I don't currently have any scheduled tasks, so I've disabled the service. If you want to do the same, either use the Services MMC snap-in, or you can use sc...

sc config schedule start= disabled

Edit (2004-09-02): OK, I've re-enabled it. A number of readers (I have readers?!?) mentioned that a number of system optimisation features also run using the Schedule service, something I confirmed with Filemon. Obviously the ability to create a job that starts as LocalSystem is a bit disconcerting, but it seems that this can only be done locally by an authenticated administrator. I'd prefer to have a separate optimisation service, but you can't have everything.

Wednesday, 25 August 2004

I knew something was bugging me...

The Register: Windows XP SP2 features security crater - report

Basically, a malicious program can add new data to the root\SecurityCenter WMI namespace - an instance of AntiVirusProduct or FirewallProduct. Windows XP SP2's Security Center may then report that an anti-virus product or firewall is installed and working correctly, even when one is not.
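You can see what's currently registered in that namespace from the command line with something like:

wmic /namespace:\\root\SecurityCenter path AntiVirusProduct get /value

(and similarly for FirewallProduct).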

The mitigation, though, is that WMI is indeed read-write - but only for Administrators. PC Magazine's original article (attributed to eWeak by The Register's pet hack) makes this clear.

I was thinking along the lines of changing the ACL on this namespace so that only LocalSystem could create new providers, but failed: the ACLs are inherited, and you can't clear the inheritance in the MMC WMI Control snap-in (it's in the Computer Management console). In any case, this would only be a small hindrance, since by default on XP SP2 at.exe still exists and still allows you to do

at 23:39 /interactive cmd

to get a command interpreter running as LocalSystem:

C:\>dumptoken
Process primary token
This is a unrestricted token
Token type: primary
Token ID: 0x25066b9
Authentication ID: 0x3e7
Token's owner: BUILTIN\Administrators (alias)
Token's source: *SYSTEM* (0x0)
Token's user: NT AUTHORITY\SYSTEM (user)
Token's primary group: NT AUTHORITY\SYSTEM (user)
[...]

(dumptoken from w00w00.org)

This only works if you're an administrator, of course.

And if you think about it, most WMI providers get registered with WMI by an installer, running as an administrator. I'm not too sure about the default ACL for the root namespace allowing Everyone to modify provider information, though.

Anyway, what's the point of misreporting the firewall or antivirus status? A program running using a token containing the Administrators group SID (or, loosely, running as an administrator) can punch a hole in the firewall using the firewall administration API - how many users will regularly check the list of exceptions? It can normally install a device driver. It could install a file-system filter driver below the anti-virus filter driver in the disk driver stack and remove any trace of its own code (you'd probably need to add another one above to add the code back in so it could run).
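To illustrate just how little ceremony the hole-punching takes, here's a sketch against the SP2 firewall COM API (netfw.h). Error handling and Release()/SysFreeString cleanup are elided, and the path and display name are obviously made up:

#include <windows.h>
#include <netfw.h>

void PunchHole()   // sketch only: all error handling and cleanup elided
{
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

    INetFwMgr* mgr = NULL;
    CoCreateInstance(__uuidof(NetFwMgr), NULL, CLSCTX_INPROC_SERVER,
                     __uuidof(INetFwMgr), (void**)&mgr);

    INetFwPolicy* policy = NULL;
    mgr->get_LocalPolicy(&policy);
    INetFwProfile* profile = NULL;
    policy->get_CurrentProfile(&profile);
    INetFwAuthorizedApplications* apps = NULL;
    profile->get_AuthorizedApplications(&apps);

    INetFwAuthorizedApplication* app = NULL;
    CoCreateInstance(__uuidof(NetFwAuthorizedApplication), NULL,
                     CLSCTX_INPROC_SERVER,
                     __uuidof(INetFwAuthorizedApplication), (void**)&app);

    // A made-up path and an innocuous-looking display name.
    app->put_ProcessImageFileName(SysAllocString(L"C:\\evil\\backdoor.exe"));
    app->put_Name(SysAllocString(L"Critical Update Helper"));
    app->put_Enabled(VARIANT_TRUE);

    apps->Add(app);   // exception added - no prompt, no fanfare
}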

The true mitigation, as for most things, is to run as a regular user. Don't run as a member of the Administrators group.

Friday, 20 August 2004

Heap in NT can use a lot of virtual addresses

Raymond Chen has recently been writing about the /3GB switch, virtual memory and physical memory. If you don't understand what virtual memory is, go there now.

Back? Good. I want to talk about a situation where you might run out of virtual address space a long time before you run out of RAM. Exchange's documentation recommends that if you have more than 1GB of memory you should enable the /3GB switch. As Raymond notes, basically the STORE.EXE process, due to inefficiencies, uses more virtual address space than actual RAM.

In a comment on Carmen Crincoli's blog, Larry Osterman mentioned that the NT heap manager expands the virtual address space for a heap in powers of two. Note that this doesn't mean the physical memory used goes up like that - Windows only allocates physical memory to a process on demand, and only up to its maximum working set size unless memory is abundant. However, you can get into a situation where your heap's virtual address space resembles a Swiss cheese - there's plenty of free memory, but it's all in small free blocks. To allocate a large block, Windows has to expand the heap's address space, which it does by reserving a new chunk of address space twice the current size of the heap. It must be possible for this expansion to be spread over multiple chunks - if each doubling needed a single contiguous block, the algorithm would start failing at around 512MB of heap, where the next doubling needs a contiguous 1GB out of the 2GB user address space...

What's the initial size of the heap? It's marked in the executable's header - the default generated by the MS linker is 1MB.

Note that Windows supports multiple heaps. The recommended practice is actually to have one heap for each distinct type (or at least size) of object; I think I've written about this before. Unfortunately many APIs don't give you a choice where something is allocated - they just dump it on the default heap.
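For anyone who hasn't used private heaps, the pattern is tiny - a minimal sketch:

#include <windows.h>

struct Widget { int id; /* ... */ };

int main()
{
    // One heap just for Widgets: same-size blocks pack tightly, and
    // freeing them can't fragment the process default heap.
    HANDLE widgetHeap = HeapCreate(0, 0, 0);   // growable, default sizes
    if (widgetHeap == NULL)
        return 1;

    Widget* w = (Widget*)HeapAlloc(widgetHeap, HEAP_ZERO_MEMORY,
                                   sizeof(Widget));
    // ... use w ...
    HeapFree(widgetHeap, 0, w);

    // Destroying the heap gives its entire virtual address range back
    // in one go - handy when a whole family of objects dies together.
    HeapDestroy(widgetHeap);
    return 0;
}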

[Edit: I knew something was bothering me about the original title - I confused VM with virtual addresses - as we know, they're not the same thing...]

A horde of whiny bastards

...have taken over the Channel 9 forums. You can tell that few of them have ever run a server OS for anything real, and that they've swallowed the Unix pill whole (although I suspect some have taken it as a suppository - there's certainly something stuck up their arse...) Few of the developers seem to have actually shipped a product, or to have any understanding of what it takes to maintain software and provide compatibility between versions - or that users need compatibility between versions, or that MS might have, you know, done some research.

It's a shame because C9 was getting something of a conversation going between Microsoft users (as opposed to abusers) and the product teams. I'm still going to watch the videos, though.

In other news, I see that the horde now seem to have got bored with trying to bait the IE blog.

Sunday, 15 August 2004

Running with reduced privileges

I know I have to get to it sometime. In the current climate, anything you can do to mitigate the effects of a successful attack is worthwhile - and there's no sign of things letting up.

So I've taken my normal user account out of the Administrators group - I'm just a user again.

Now, clearly, as a power user, a developer, an occasional gamer and the machine owner, I still need to be able to launch programs as an Administrator. The runas tool is helpful here! However, there are times when you want a program to run with elevated privileges, but in your own context.

Aaron Margosis, a Microsoft consultant, has come up with a couple of useful tools. The first is a batch file which automates the process of adding your account to the Administrators group, creating a new shell, then removing the account again. The second is a toolbar which shows how privileged you are, in IE and Explorer.

If you're familiar with how Windows authorization works, feel free to skip this paragraph. Windows authorizes users against resources by comparing access control lists with the SIDs (Security IDs - internal representations of user accounts or security groups) in your token. The token can be either a primary token or an impersonation token. You typically get a primary token by logging on; servers can temporarily assume another identity by impersonating their client. Also included in your token are your privileges - actions you're allowed to take which may override the ACLs, or allow other actions (e.g. shutting down the system). Privileges are orthogonal to access rights. The token is a cached copy of what you were allowed when you logged on - it contains the SIDs for all groups you were a member of, and the privileges assigned to all of those SIDs.

Adding yourself to the Administrators group therefore has no effect until you next log on. Aaron's batch file gets round this by using runas to create a new logon session and hence a new token. You cannot replace the primary token for a running process - you can only create a process with a different token by calling CreateProcessAsUser.

There are two downsides to Aaron's batch file: one, it requires you to type both the Administrator's password and your own; two, there's a window during which your account is explicitly in the Administrators group (between typing the Administrator's password and typing your own).

This second one is possibly a risk. If your system crashes at this point, you're likely to still be in the Administrators group when you restart. If a service running with your credentials starts in this window, it will have privileges you didn't expect. A network logon with these credentials will also gain administrative privileges.

Can we do better? I think so. The LsaLogonUser function takes a TOKEN_GROUPS parameter which lets us add arbitrary groups to the new token, assuming authentication succeeds. The spike I put together over the last two days suggests that any privileges assigned to the (enabled) SIDs we add also end up in the token. This only requires us to supply the user's password - we don't need the administrator's. We can't totally automate it, because Windows doesn't store the authenticated password in plain text - unless we circumvent things further, e.g. by adding a 'network provider' so that NPLogonNotify gets called with the plain-text password, and storing that somewhere. This would of course be a very bad idea, opening up the possibility of the plain-text password being disclosed.
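The interesting part of the spike is the TOKEN_GROUPS that gets handed to LsaLogonUser. A condensed sketch - the LsaRegisterLogonProcess plumbing and the packing of the authentication buffer are omitted:

#include <windows.h>
#include <ntsecapi.h>

// Build a TOKEN_GROUPS adding BUILTIN\Administrators to the new token.
PTOKEN_GROUPS BuildExtraGroups()
{
    SID_IDENTIFIER_AUTHORITY ntAuth = SECURITY_NT_AUTHORITY;
    PSID adminsSid = NULL;
    if (!AllocateAndInitializeSid(&ntAuth, 2,
            SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS,
            0, 0, 0, 0, 0, 0, &adminsSid))
        return NULL;

    // TOKEN_GROUPS already has room for one entry.
    PTOKEN_GROUPS groups =
        (PTOKEN_GROUPS)LocalAlloc(LPTR, sizeof(TOKEN_GROUPS));
    if (!groups)
    {
        FreeSid(adminsSid);
        return NULL;
    }
    groups->GroupCount = 1;
    groups->Groups[0].Sid = adminsSid;
    // The SID must be enabled - that's what brings its privileges along.
    groups->Groups[0].Attributes = SE_GROUP_ENABLED |
        SE_GROUP_ENABLED_BY_DEFAULT | SE_GROUP_MANDATORY;
    return groups;
}

// This then goes in LsaLogonUser's LocalGroups parameter; the rest is
// the usual fourteen-argument mouthful.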

Lest you think this is a horrible hole in the NT security model, I'll point out that to do this, you must have a trusted connection to the Local Security Authority. To get one of these, you must call LsaRegisterLogonProcess, which requires the 'Act as part of the operating system' (SE_TCB_NAME) privilege (see what I mean about privileges being different from access rights?). In essence, that means running under the LocalSystem security context.

So where from here? Basically, I need to write a service which will listen for local requests (probably through RPC) and use the supplied credentials to create a token then create the appropriate process. This is going to be an adventure because before yesterday I'd never written a service! I'll be writing it in C++ because quite frankly there'd be no benefit in using C# due to the amount of P/Invoke marshalling required. I'll have to learn how to write RPC servers and clients, work out which logon provider to call (obviously the same one that the user used!) and how to write events to the event log. I'll also need to design access control so administrators can control who gets to do this.

Saturday, 14 August 2004

SP2 Torrent site gets DMCA cease-and-desist

I see from comments on Robert Scoble's blog that Downhill Battle have been asked to remove the SP2 torrent links. The reason is apparently that Microsoft don't want SP2 downloaded from other sites. I suppose distributing it this way prevents MS monitoring how many people have actually downloaded it - but how can they tell how many times a particular download, particularly the network install, has been used anyway?

BitTorrent has mechanisms to check that the download was received correctly: the .torrent file includes precomputed SHA1 hashes for each block, so the client can determine whether it has received that block successfully. The torrent's info dictionary - which describes the whole download, possibly spanning multiple files - is itself hashed too, and that 'info hash' is what identifies the torrent to the tracker when locating peers.

In the particular case of SP2, the file is also digitally signed by Microsoft. This ensures that it really is the genuine release - or at least that the file hasn't been modified since it was signed, and that it was signed by an organisation claiming to be Microsoft. It appears to have had the right effects, so I'm sticking with it.

Interestingly, though, Microsoft haven't tried to take down the tracker. Clearly there's a lack of understanding of how BitTorrent really works.

I'm trying to decide whether to host the .torrent file myself. I'm still seeding the download itself - upload rate currently 24KB/s.

Thursday, 12 August 2004

XP SP2 limits Raw Sockets

Ian Griffiths: Raw Sockets Gone in XP SP 2.

Michael Howard: A little more info on raw sockets and Windows XP SP2.

I think what Michael's basically saying is that data sent through a raw socket is parsed by the stack: any packet with protocol 6 (TCP) is discarded, and any packet with protocol 17 (UDP) must have a source IP address matching one of the local interfaces or it's dropped. This prevents programs from using raw sockets to disguise what they're up to as standard Internet protocols - hopefully preventing zombies from disguising the origins of their packets.
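For the curious, the kind of code affected looks like this - a minimal sketch of a raw socket with IP_HDRINCL, the mode where the application supplies the whole IP header, forged source address and all:

#include <winsock2.h>
// link with ws2_32.lib; creating raw sockets requires administrator rights

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (s == INVALID_SOCKET)
        return 1;

    BOOL on = TRUE;
    setsockopt(s, IPPROTO_IP, IP_HDRINCL, (const char*)&on, sizeof on);

    // Pre-SP2 you could sendto() a hand-built TCP segment here, or a UDP
    // packet with a spoofed source. On SP2 the stack parses the buffer:
    // protocol 6 (TCP) is discarded, and protocol 17 (UDP) only goes out
    // if the source address matches a local interface.

    closesocket(s);
    WSACleanup();
    return 0;
}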

If you really need to do this, you can still use a device driver which avoids the Microsoft TCP/IP stack entirely, such as WinPcap. Dana Epp's already produced a patch for nmap which causes it to revert to using WinPcap (it's a two-line patch!)

This probably means, of course, that new zombie programs will just come with WinPcap, because I don't think SP2 limits driver loading (aside from it being limited to users/groups with the 'Load and unload device drivers' privilege, granted to the Administrators group by default).

Say 'No' to IL in silicon

Ian Griffiths wrote an interesting article. However, I have to take issue with a bit of it:

"A clear example of where general purpose CPUs have become fast enough to render special purpose chips obsolete is in mobile phones. (Or 'cellphones', as I believe they're usually called in the USA.) A few years ago, all digital mobile phones had two processors in them: a general purpose one to handle the signalling protocol and user interface, and a specialised digital signal processor (DSP). [...] Having these two processors was a problem. The extra processor made phones bigger, shortened battery life, and decreased reliability as a result of the increased component count. But it was necessary because only a specialised DSP was fast enough to perform the necessary processing, while a general purpose CPU was required to handle the signalling protocol and user interface. But a few years ago, the performance of low-power embedded CPUs (and in particular the ARM CPU) got to the point where one general purpose CPU could do all of the work."

My understanding is that the DSP is in fact still used - but it's no longer a separate chip. Instead it's been absorbed as an 'IP core' into the same chip which houses the display controller, keyboard controller, serial interface controllers, memory controller and the CPU core. It's still loosely labelled the 'CPU' but to reflect the changed role it's now sometimes called an 'application processor'.

See, the way chips are designed has changed quite a bit. When I was at university, and still studying Electronics, we were taught a language known as VHDL. Originally this language was used for simulating circuits before producing them, but a changeover was in progress towards synthesising circuits instead: getting a compiler to generate the configuration for a programmable logic chip, and eventually the actual masks for producing silicon.

This has led to a market in, essentially, source code for circuits: the ability to buy in IP cores for a dedicated chip. By combining IP cores, a single chip to run a device can be produced - and if you're producing millions of them, you can save a heck of a lot. A major reason ARM have been so successful is that they've licensed their processor cores in addition to manufacturing their own CPUs - basically the core connected directly to the pins. Almost every ARM-compatible application processor out there - Texas Instruments, Samsung, whoever - is basically a licensed ARM core - an actual ARM design - decorated with whatever other cores the ODM (Original Design Manufacturer) decided to throw in.

Take my new Dell Axim X30. It features an Intel PXA270 processor. This single chip has the latest XScale core (Intel's own design implementing the ARMv5 instruction set - Intel actually design their own cores, rather than using ARM's), a memory controller capable of driving SDRAM, static RAM and Flash ROM, an LCD controller with 256KB of on-board RAM, a USB host controller, a camera capture interface, a Secure Digital media I/O controller, a SIM card interface, a keypad controller, three standard UARTs (serial devices), an audio controller, a CompactFlash controller, and a USB client implementation. Some of these parts are probably licensed cores (although Intel do have the resources to design their own).

[Edit] I need to review these things before posting them. I forgot to add this bit:

Recent ARM cores do have Java bytecode interpreters apparently in hardware - ARM call this Jazelle. However, it must be remembered that Java was originally designed to be interpreted. CIL was not - it was designed to be JIT compiled. Eventually even Sun had to realise that JIT-compiling Java produced far better-performing programs than interpretation ever could.

Tuesday, 10 August 2004

Cache management

I'm sticking it out with IE after what I discovered about the current state of Firefox security. Unfortunately, in SP2, it seems as though the cache bugs are still there.

So I've started using CacheSentry, a free program for monitoring and cleaning up your cache. And I think I'll be sticking with the free version: the (proposed) UI for CacheSentry Pro is verging on a submission to The Daily WTF or This Is Broken.

Slight disadvantages are that CacheSentry's limit for cache size is 700MB, and IE doesn't behave quite right when it hits CacheSentry's limit - I started getting stylesheet-missing problems again. However, dropping the limit to 500MB then expanding it again seems to have worked for the moment.

The author has documented some bugs in IE's cache mechanism. I say IE's - I should say WinInet's, as that's the component that does the caching.

How does TransmitFile actually work?

Last week, Larry Osterman blogged a little about TransmitFile. I was sure I'd read somewhere that it was implemented entirely in kernel mode, but I couldn't find that assertion.

So I pulled out DUMPBIN. This is what I found, after writing a little test server I call Really Simple File Transfer Protocol - it listens on a socket, then when a client connects it dumps a file down the connection; like I said, Really Simple. It's so simple it doesn't allow concurrent connections.

The TransmitFile function in MSWSOCK.DLL is a simple wrapper: it calls WSAIoctl with the SIO_GET_EXTENSION_FUNCTION_POINTER control code, then calls the result (if WSAIoctl didn't return SOCKET_ERROR). For a standard TCP socket using Microsoft's stack, the WSAIoctl call returns the address of mswsock!MSAFD_TransmitFile (an internal function - you won't see it in the export table). In turn, this function does a bit of checking, then calls ntdll!NtDeviceIoControlFile and waits for an object to be signalled before returning. NtDeviceIoControlFile is the underlying NT function behind DeviceIoControl - MSWSOCK.DLL is calling into AFD.SYS, the kernel-mode Ancillary Function Driver that implements sockets for the TCP/IP stack.

I decided to stop there, as I already had enough of a headache from reading x86 disassembly...
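For completeness, fetching the extension function pointer yourself looks like this from the calling side - a sketch from the documented WSAIoctl contract, not from the disassembly:

#include <winsock2.h>
#include <mswsock.h>   // WSAID_TRANSMITFILE, LPFN_TRANSMITFILE

LPFN_TRANSMITFILE GetTransmitFilePtr(SOCKET s)
{
    GUID guid = WSAID_TRANSMITFILE;
    LPFN_TRANSMITFILE pfn = NULL;
    DWORD bytes = 0;
    // Ask the socket's provider for its TransmitFile implementation.
    if (WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                 &guid, sizeof guid, &pfn, sizeof pfn,
                 &bytes, NULL, NULL) == SOCKET_ERROR)
        return NULL;
    return pfn;   // for MS's provider, mswsock!MSAFD_TransmitFile
}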

How does Explorer know which zone a file was downloaded from?

In this post, I showed a screenshot of the 'Open File - Security Warning' dialog. But how does Explorer know to show this dialog - and how could you make it show in your own application, or write out the information from your own download application?

When Internet Explorer, Outlook Express, or Windows Messenger in XP SP2 write a downloaded file, they use the IAttachmentExecute interface (I think - the documentation is obscure). This writes an Alternate Data Stream named 'Zone.Identifier' on an NTFS drive. For the HS5Setup program I took the screenshot for, it contains:

[ZoneTransfer]
ZoneId=3

(information captured with 'more' - 'type' wouldn't show the stream's contents)

When you open a file (I assume of a limited set of types, but I can't find any configuration for it), Windows checks for the Zone.Identifier stream; if it finds it, and it names an Internet zone, you get the attachment security dialog.
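Writing the stream from your own download code is simple with CreateFile, since NTFS addresses streams as filename:streamname. A sketch - the file name is illustrative:

#include <windows.h>

int wmain()
{
    // Open (or create) the Zone.Identifier stream on the target file.
    HANDLE h = CreateFileW(L"download.exe:Zone.Identifier",
                           GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    const char data[] = "[ZoneTransfer]\r\nZoneId=3\r\n";   // 3 = Internet
    DWORD written;
    WriteFile(h, data, sizeof data - 1, &written, NULL);
    CloseHandle(h);
    return 0;
}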

Get XP SP2 on a Torrent

54 KiB/s

Download links at sp2torrent.com. I decided not to wait for Automatic Updates to kick in, even if RC2 users will get the update first.

Since I started writing this post, it's downloaded another 13MB.

Sunday, 8 August 2004

XP SP2 *will* be distributed through Automatic Updates

This is a scary prospect - Microsoft have confirmed it.

I suggest that companies not using Software Update Services either start doing so or set Automatic Updates to advise of new downloads only. I could cope with our developers making the switch one-by-one - they can handle any errors that result - but I think we need to manage our salesmen more closely. Specifically, we need to test that SP2 works properly with our VPN software and with their new laptops.

You can configure Automatic Updates through Group Policy without having to touch every computer, if you have an Active Directory-based domain.

Ironically I'm more worried about getting the salesmen updated ASAP than the developers, because I think the salesmen are less likely to practise safe computing. But it has to be organised.

Friday, 6 August 2004

Tim Bray weighs in on Linux and patents

http://tbray.org/ongoing/When/200x/2004/08/05/LinuxPatents.

I find this comment interesting:

"Second, the Linux community would—after some pain—figure out a way to route around the litigation; it would be real work, but it would happen."

OK, Tim, but you just wrote:

"The Lesson In software, assume that everything is already patented. You can't build anything, no matter how new it is, without infringing someone's patent."

You've got a vicious circle: in any attempt to avoid this patent (which you may not be able to do anyway, since the patent covers the method, not the actual code that implements it) you may well end up infringing others. Remember Microsoft's proposed solution to the Eolas patent: don't perform the infringing operation (automatically scripting binary controls loaded into a page).

[Edit: Fixed nasty double-encoding error, sigh]

No XP SP2 yet

...but I did just get a new version of Windows Update v5.

It looks to me as though MS is going to push SP2 through Automatic Updates. I'm not sure this is a good idea. Yes, everyone should have SP2 - but only under their control, and only when they ask for it. This isn't a small patch - the RC2 download was 200MB or so. Even though Windows Installer 3.0 allegedly supports differential binary patching, if everything has been recompiled with the latest /GS implementation, a heck of a lot will have changed.

I just hope I won't have to uninstall RC2 before installing the RTM version.

Tuesday, 3 August 2004

Odd PC behaviour? Run a memory tester

MS has written one, which can be booted from either a floppy disk (what's one of those?) or a CD: download.

(via Dr. HardwareBlog)

It was twenty years ago today...

...well, OK, it was nine years ago some time this week, I forget exactly.

When I was at school, around Christmas 1994, I joined a band with a few friends. In August of 1995, we recorded a number of songs we'd written (mostly by David Kane, our guitarist, often with Roger Barden, the keyboard player, although I and the drummer, Chris Velvick, also have writing credits). Since David is now getting married in September (in Toppenish, Washington, USA - he's marrying an American girl), and I have some free time, I thought I'd get the demos onto CD. The master is a standard, albeit very expensive, audio cassette - it uses a different tape formulation to get very low noise.

I also converted the songs to MP3. I thought I could actually upload all of them to my Demon web space, but it's only 20MB, and I encoded them (all 4+ minutes each - we were a little long-winded) at 192kbps. There's arrogance. So there's only space for one, really. I've chosen one of my favourites: Survivor.

Oh, my part? I do the vocals. That's it. While I now play a bit of guitar - and Survivor is one of the few things I can play - I couldn't at the time.

As for what happened to the band - we found that our final Upper Sixth year was a lot harder than the previous one and didn't find a lot of time for rehearsing, let alone writing and recording new material. David continued with his Music A-level; David and Roger then took a year out travelling (Japan and the Amazon) before going to University to study Music at Bangor and History at Exeter (I think) respectively, starting in 1997. Christian studied Chemistry and drinking at Edinburgh, while I went to Aston (which I've written about before).

David is now studying for a PhD at SOAS, having completed his MA in Ethnomusicology, and I believe is still planning to visit Bangladesh again some time in the winter. Roger is with some kind of missionary organisation in Leeds, last I heard. Christian is some kind of high-powered recruitment consultant recruiting middle managers, somewhere in the Thames Valley - I last saw him a year ago at Roger's pre-wedding meal (you can't really call that a stag night).

And I'm muddling along as - mainly - a Pocket PC C/C++/C# developer working for 5D / Mnetics. It's a little unclear right now, actually - I answer the phone as Mnetics, but still officially work for 5D, which is now owned by Mnetics as of a couple of weeks ago.

[Update: I forgot to link to Audacity, the software I used to capture and manipulate the tracks.]