
23 September 2008, 16:32

Intel's new six-core CPUs highlight Windows' limitations

Liam Proven

Modern PC specifications are starting to hit some of the fundamental limits of current operating systems – both architectural and licensing-imposed.

Intel's latest CPU, the hex-core Xeon 7400 series, was announced on Monday 15th September, and Intel had Unisys to hand to help demonstrate it. Unisys sells the biggest single-system Intel x86 server around – the ES7000 model 7600R, which supports up to 16 Xeons. Fully populated with six-core chips, this super-high-end PC server now has 96 cores and 1TB of RAM. Few commentators have noted that this specification is a little embarrassing for Microsoft: Windows simply cannot drive so many processors. The top-end Windows Server 2008 Datacentre Edition only supports 64 processor cores.

This is an entirely separate limitation from the licensing restrictions, which limit the number of actual socketed processor chips. For instance, the less exotic Enterprise Edition of Windows Server 2008 is artificially restricted to only eight sockets, and the Standard and Web Server editions to just four. This restriction ignores the number of cores per socket, so single-, dual-, quad- or hex-core chips don't affect licensing, but they do affect how many actual processor cores the OS can utilise. Note that the 64-core limit is a feature of the x86-64 edition of Windows Server 2008 – the 32-bit edition can only cope with 32.
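As a rough illustration of how the two limits interact – a sketch based only on the figures quoted above, not the full licensing terms, with a hypothetical helper function – the socket cap and the core ceiling apply independently:

  SOCKET_LIMIT = {        # licensing limit on physical chips (sockets)
      "Enterprise": 8,
      "Standard": 4,
      "Web Server": 4,
  }
  CORE_LIMIT_X64 = 64     # cores the x86-64 kernel will drive
  CORE_LIMIT_X86 = 32     # the 32-bit kernel's ceiling

  def usable_cores(sockets, cores_per_socket, edition="Datacentre", x64=True):
      """Cores Windows Server 2008 can actually use, per the figures above."""
      sockets = min(sockets, SOCKET_LIMIT.get(edition, sockets))  # no socket cap quoted here for Datacentre
      return min(sockets * cores_per_socket, CORE_LIMIT_X64 if x64 else CORE_LIMIT_X86)

  # The fully populated Unisys ES7000/7600R: 16 sockets of six-core Xeon 7400s.
  print(usable_cores(16, 6))   # -> 64 of the machine's 96 cores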

Sixty-four cores are a lot by current x86 standards, but such chips do exist. Sun Microsystems' UltraSPARC T2 processor has eight cores on a single chip, each of which can run eight threads, so it appears to Solaris as a 64-core processor. The UltraSPARC T2 Plus swaps some memory controllers and a 10Gb Ethernet connection for two-way SMP, giving a two-socket server with 128 hardware threads, and the T2's successor, the forthcoming "Victoria Falls" processor, will upgrade this to 128 threads per chip and glueless two-socket servers with 256 hardware threads. If that weren't enough, Sun is reportedly planning a four-socket machine, linked by a crossbar switch, which would support 256 to 512 simultaneous threads depending on processor.
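Put as simple arithmetic – a sketch using only the thread counts quoted above; the core/thread split of the forthcoming part is not given here:

  t2_threads           = 8 * 8           # UltraSPARC T2: 8 cores x 8 threads = 64, seen by Solaris as 64 CPUs
  t2_plus_two_socket   = 2 * t2_threads  # T2 Plus, two-way SMP: 128 hardware threads
  successor_two_socket = 2 * 128         # 128 threads per chip, glueless two-socket: 256 threads
  print(t2_threads, t2_plus_two_socket, successor_two_socket)   # 64 128 256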

Also, although they're not general-purpose CPUs, Tilera's TILE64 and TILEPro64 processors are genuine 64-core devices, with the cores connected by an on-chip mesh network. The cores run a VLIW instruction set which, like that of China's Godson processors, is derived from SGI's MIPS architecture.

Intel cannot afford to lag far behind such devices, so in a few years' time, when Windows Server 2008 will doubtless still be in widespread use, its 64-core limit will become a serious constraint. The next iteration of Windows Server will have to overcome this, but replacing a server OS with a new release is not a trivial exercise.

Such restrictions are cropping up more and more often on modern high-end PCs. For instance, 32-bit PC OSs can only access a total of 4GB of RAM. That's about £45 worth at the time of writing – about £12 a gigabyte – which is not a particularly high-end price. Worse still, that 4GB of address space also has to accommodate the memory apertures of graphics cards and other I/O devices. With 512MB graphics cards now common and 1GB and greater available, the RAM actually available on a 32-bit system can be 2GB or even less. Microsoft's fix for this in 32-bit Vista is to make the OS report how much RAM the PC has fitted, not how much of it the OS can actually use.
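A back-of-the-envelope sketch of why the reported figure shrinks – the reservation sizes below are hypothetical examples, and real firmware carves up the address map in much finer detail:

  GB, MB = 1024**3, 1024**2
  address_space = 2**32                  # 4GB of physical addresses in total on a 32-bit system

  mmio_reserved = {                      # hypothetical reservations below the 4GB mark
      "graphics card aperture": 1 * GB,  # e.g. a 1GB card mapped into the address space
      "other I/O, firmware, APICs": 512 * MB,
  }

  fitted_ram = 4 * GB
  usable_ram = min(fitted_ram, address_space - sum(mmio_reserved.values()))
  print(usable_ram / GB)                 # -> 2.5 (GB), despite 4GB being fitted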

One answer is to switch to the x86-64 edition of Vista, which can access 128GB of RAM – or 8GB and 16GB for the Home Basic and Home Premium editions respectively. However, 64-bit Windows can't use any 32-bit drivers and will not run any 16-bit code whatsoever. It also lacks some features of the 32-bit edition. Again, migrating to 64-bit isn't straightforward.

Drivers for 3D cards are less of an issue with server OSs, such as the x86-64 Enterprise and Datacentre editions of Windows Server 2008. These can access 2TB of RAM, although Web and Standard editions are restricted to just 32GB. That's about £384 worth – about the same as the cost of the OS itself.

Another limitation that users are now starting to come across is volume size. Although NTFS filesystems can reach large sizes in theory – the limit on Windows XP is 2³²–1 clusters, which for 64KB clusters means just under 256TB – the size limit for individual volumes is rather lower: just 2TB. This restriction is less problematic on servers, where filesystems are likely to be formed of arrays of smaller disks, but it's becoming a problem on workstations, where 1.5TB SATA drives are now around £130. When drives of over 2TB become commonly available, the PC's 25-year-old MBR partition table will have to be abandoned, probably in favour of the GUID Partition Table (GPT) scheme. This, in turn, will probably spell the death of the traditional BIOS in favour of Itanium-style EFI firmware. Apple has already made this leap – Intel-based Macintosh computers use EFI firmware and GPT partitioning. Windows users are stuck unless they happen to be running on Itanium systems, which already use EFI and GPT.
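A quick sketch of where those numbers come from; the 2TB figure assumes the traditional 512-byte sectors that the MBR partition table addresses with 32-bit fields:

  KB, TB = 1024, 1024**4

  ntfs_max_clusters = 2**32 - 1              # 32-bit cluster numbers on Windows XP
  print(ntfs_max_clusters * 64 * KB / TB)    # just under 256 (TB) with 64KB clusters

  mbr_max_sectors = 2**32                    # 32-bit sector fields in the MBR partition table
  print(mbr_max_sectors * 512 / TB)          # 2.0 (TB) with 512-byte sectors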

These limits – 64 cores, 2TB disk volumes – are restrictions imposed by the designs of the PC and Windows operating systems. Sun's SPARC Enterprise M9000 server runs Solaris on 256 CPU cores and the standard Linux kernel can cope with 255. Such large numbers of cores are currently only found on systems composed of multiple blades or clustered servers, but they do exist – in 2007 NASA built a single-system-image Linux supercomputer on SGI Altix hardware with 1,024 dual-core Itanium 2 processors and 4TB of RAM. Scaling to such levels is currently very inefficient; Linux starts to struggle above 64 processors.

For now, except for special-purpose systems such as cluster-based supercomputers, the only answer is virtualisation: using these large-scale machines to run multiple virtual machines of limited resources, each of which performs certain specific tasks. To escape Windows' limitations, though, requires using a different host OS with higher limits. Dedicating four hardware processor cores to each of twenty quad-processor virtual machines means the host OS must manage at least 81 processors – 80 for the guests plus at least one for itself – which Windows cannot yet do.
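The core budget in that example, as a trivial sketch (the one-hardware-core-per-virtual-CPU pinning is an assumption for illustration; real hypervisors overcommit and schedule far more flexibly):

  def host_cores_needed(vms, cores_per_vm, cores_for_host=1):
      # One hardware core per virtual CPU, plus at least one core for the host itself.
      return vms * cores_per_vm + cores_for_host

  print(host_cores_needed(20, 4))   # -> 81, already past Windows' 64-core ceiling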

This sounds like a ridiculous specification, but it isn't. If thin clients are used to access Windows desktops running in VMs on a host server, hundreds of VMs do not seem that many. Limitations on hard disk size will strike sooner still, and the 32-bit 4GB memory ceiling is affecting lots of people today.

Some of these limits, such as the number of CPUs, have good reasons behind them – parallelising code efficiently across many cores is an extremely difficult problem. Some, such as the disk size limitations, stem from the design legacy of IBM PC-compatible computers. And some, such as the memory size limits, are either imposed by the use of 32-bit code, as in the 4GB limit, or are simple Microsoft licensing restrictions, as in the maximum permitted memory ceilings on the x86-64 editions of Windows.

For now, these problems affect very few users – but by the start of the next decade, they will cause pain for millions.
