Intel pre-launched its Sandy Bridge Xeon E3-1200 series of processors on Tuesday, letting the world know that it intends to dominate the new micro server market being created by SeaMicro, Dell, Tyan, Calxeda, and others.
SeaMicro is making a lot of noise about its Atom-based SM10000 machines, which cram 512 cores into a 10U chassis, and Tyan and Dell are already offering single-socket, Xeon-based micro servers that slide into rack enclosures more densely – and more cheaply – than commercial blade servers.
Calxeda, which is building ARM-based micro servers, said yesterday that it would be able to put 120 servers, with a total of 480 ARM cores, into a 2U chassis using a micro server design that includes an on-chip fabric interconnect.
None of these machines is a volume product yet, and none is suitable for all workloads. But Intel wants everyone to know that the impending Xeon E3-1200 series chips, as well as next year's Xeon and Atom processors, are a perfect fit for micro servers.
That said, the Xeon E3-1200 chips and the Cougar Point C202 and C204 chipsets previewed today are not just for micro servers. They can be used in any single-socket server, be it a rack, tower, blade, or micro rack/blade hybrid machine.
These forthcoming Xeon chips come out of Intel's Data Center Group, which designs and makes server and workstation processors, chipsets, and networking chips. According to an Intel spokesperson, they are only available for servers. Single-socket workstations will apparently use different parts, likely because they will have embedded graphics processors, unlike the server variants of the Xeon E3-1200s previewed today. The workstation chips will probably also give customers the option of more cores and discrete, external graphics cards.
There are seven Xeon E3-1200 chips: six with four cores and one with only two. There is one high-end 95 watt part and four 80 watt parts. Two of the chips, which sport the L designation, are low-voltage parts likely to be of most interest to micro server builders. The two-core version, the E3-1220L, spins at 2.2GHz and dissipates only 20 watts using Intel's thermal design power (TDP) metric for gauging power consumption and heat dissipation. The four-core version, the E3-1260L, runs at 2.4GHz and warms up to 45 watts.
All of the new chips support the second generation of Intel's Turbo Boost technology, which lets a core's clock run faster when the other cores are not doing too much work. All of the chips have two DDR3 memory channels and support four memory slots for a maximum of 32GB of main memory – which is fine for a single-socket server, micro or otherwise. Memory runs at only one speed: 1.33GHz.
All of the chips but the E3-1220 have HyperThreading support as well, which virtualizes each core such that the operating system or hypervisor running atop the chip sees two instruction streams per core, helping each chip get more work done on multithreaded jobs.
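The effect is easy to see from software. Here is a quick sketch (assuming a Linux host; an illustration, not Intel tooling) that compares the logical CPUs the scheduler sees against the physical cores underneath – with HyperThreading enabled, the first number is double the second:

```python
# Compare logical CPUs (instruction streams) with physical cores on Linux.
import os

logical = os.cpu_count()  # instruction streams visible to the scheduler

cores = set()             # unique (socket, core) pairs
phys_id = core_id = None
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("physical id"):
            phys_id = line.split(":")[1].strip()
        elif line.startswith("core id"):
            core_id = line.split(":")[1].strip()
        elif not line.strip() and core_id is not None:
            cores.add((phys_id, core_id))
            phys_id = core_id = None
if core_id is not None:   # the file may not end with a blank line
    cores.add((phys_id, core_id))

print(f"logical CPUs: {logical}, physical cores: {len(cores)}")
```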
In a conference call Tuesday with analysts and journalists, Boyd Davis, general manager of marketing at the Data Center Group, revealed neither the chips' pricing nor a precise launch date, but did say they would be out in "the next few weeks." The official launch will likely be just ahead of or during the Intel Developer Forum event in Beijing on April 12 and 13 – but Davis wouldn't say. All of the chips, including the 20 watt and 45 watt parts, are in production now.
In the second half of this year, Intel will kick out another Sandy Bridge part that will be rated at 15 watts, and the company is working on a variant of the Atom processor – with 64-bit addressing, VT-x virtualization electronics, and ECC memory scrubbing – that will launch in 2012 and deliver sub-10 watt thermals.
As far as Intel is concerned, micro servers are just one of a number of "density optimized" machines for which it needs to create chips: half-height blade servers, half-width rack server nodes, and micro servers that cram a dozen or two servers into a chassis that provides shared power and cooling for the nodes. In a sense, these micro server enclosures are like tiny rack servers in their own right, extracting the shared components out of the machines for more power efficiency and density.
In this sense, they are merely rack servers done right.
Intel understands that people are excited about micro servers using Atom, ARM, and other processors, but Davis says it is important not to get carried away because "wimpy nodes" are not for everyone.
"We are pretty excited about the micro server category for very specific workloads," explained Davis, "but 97 per cent of the volume we sell to cloud service providers today are for two-socket servers." And these tend to use regular Xeon EP processors.
That said, Intel thinks that micro servers will find a home at many companies, and could account for as much as 10 per cent of the server-chip opportunity for Intel in the next four to five years. These machines, explained Davis, are good for basic content delivery, lightweight Web serving (particularly for static content), basic email and online application serving, and low-end dedicated hosting where companies still want a whole physical server to themselves.
Intel also trotted out Gio Coglitore, director of Facebook Labs, the arm of the social media giant that tests out future code on iron to see what kinds of servers it needs, who made a case for what he called "realization" of servers – that's as opposed to virtualization.
While Coglitore would not talk specifically about the server and network topology of the Facebook workloads, he said that Facebook has big back-end databases, memcached servers front-ending those databases, and Web servers serving up pages. Because of the way the company has coded its applications, they do not lend themselves to running atop hypervisors, Coglitore said. And while Facebook has not yet deployed micro servers, it has tested them at the node level and might deploy such machines in late 2011 or 2012.
And for those who think that Facebook workloads might work well on 32-bit architectures, Coglitore is having none of that. "For us, 64-bit is crucial because we are not going to port our code down to 32-bits," Coglitore said.
Coglitore also said that adding lots of memory to servers is important for Facebook's performance, so the 4GB limit of 32-bit machines would be inappropriate, whether they use Atom or ARM chips. The Cortex-A15 ARM chip, by the way, will have a funky 40-bit memory addressing scheme that may help, but it's not clear when ARM Holdings will push up to 64 bits with its designs.
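The addressing arithmetic behind Coglitore's objection is simple enough to spell out:

```python
# Address-space ceilings for the word sizes under discussion
print(f"32-bit: {2**32 // 2**30} GB addressable")  # the 4GB ceiling
print(f"40-bit: {2**40 // 2**40} TB addressable")  # Cortex-A15's scheme
print(f"64-bit: {2**64 // 2**60} EB addressable")  # what Facebook insists on
```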
Instead of doing load-balancing within a data center atop hypervisors, Facebook does load-balancing across nodes in a data center, which have redundant data. It then, if need be, does failover of physical machines across data centers. With this already working, and running as lean and mean as Facebook can make it, it's no wonder that the company just wants some compact servers that burn as little juice as possible to run the simpler parts of its workloads.
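A toy sketch of that pattern – emphatically not Facebook's code, and with hypothetical host names – looks something like this: spread requests across redundant physical nodes, and fail over to another data center only when no local node is healthy.

```python
# Toy illustration of node-level load balancing with data-center failover.
import itertools

class Cluster:
    def __init__(self, name, nodes):
        self.name = name
        self.nodes = nodes                 # hypothetical host names
        self.healthy = set(nodes)          # liveness per physical node
        self._rr = itertools.cycle(nodes)  # simple round-robin

    def pick_node(self):
        for _ in range(len(self.nodes)):
            node = next(self._rr)
            if node in self.healthy:
                return node
        return None                        # whole cluster is down

def route(request, primary, backup):
    node = primary.pick_node() or backup.pick_node()
    if node is None:
        raise RuntimeError("no healthy nodes in either data center")
    return node

east = Cluster("east", ["web-e1", "web-e2", "web-e3"])
west = Cluster("west", ["web-w1", "web-w2"])
east.healthy.clear()                # simulate a data-center outage
print(route("GET /", east, west))   # request fails over to a west node
```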
Facebook is a big buyer of bespoke servers made by Dell's Data Center Solutions unit, and it seems likely that if anyone wins the micro server contract at Facebook, it will be Dell, with a variant of the Viking chassis and Dragon servers that the company was showing off last September.
Intel doesn't want to step on the enthusiasm some are showing for the Atom processor, but at the same time the company still thinks that most micro servers will use Xeon E3s, not Atoms. "We're super excited about what SeaMicro is doing with Atom, but we think we can do better," Davis said.
SeaMicro has been very clear that it will use whatever chip its customers want in its SM10000 server designs, and has said that it can plunk Xeon, Opteron, or ARM processors into its boards and hook them into the network fabric it created for the SM10000, no problem. It would not be at all surprising to see a Xeon E3 variant of the box soon, and well ahead of next year's ECC-enabled Atom processor.
Educational institutions and social networks are the worst when it comes to leaving their Web sites exposed to known vulnerabilities, with health care and banks doing the best, according to a study by WhiteHat Security.
According to its 11th annual Web Site Security Statistics Report, 71% of schools have unpatched software vulnerabilities on their Web servers all the time, while 58% of social networking sites always have such vulnerabilities. By contrast, 14% of health care organizations and 16% of banks have unpatched vulnerabilities all the time. The average for all business sectors was 44%.
Banks also showed well on the percentage of sites that were exposed to vulnerabilities fewer than 30 days per year, at 51%. Financial services was No. 2 with 22%, the report says. The average was 16%.
WhiteHat's data was drawn from 400 businesses that outsource Web site vulnerability management to the firm.
Banks did well in the overall number of vulnerabilities they had during the year, with an average of 30. The average for all business sectors was 230. Retail stores faced the highest number of vulnerabilities with 404, WhiteHat says.
"While no industry approached anywhere near zero for an annual average, banking, health care and manufacturing performed the best out of all the industries with 30, 33 and 35 serious vulnerabilities respectively per Web site during 2010 for a rough average of 2.5 or so vulnerabilities per month," the WhiteHat report says. "On the opposite end of the spectrum, Retail, Financial Services and Telecommunications, whose Web sites had the most reported issues, measured 404, 266 and 215 serious vulnerabilities per site -- or between 18 and 34 per month."
Simply being exposed doesn't accurately indicate the likelihood a site will suffer an attack, the report says, because some types of vulnerabilities appear far more often than others. Information leakage and cross-site scripting each have a 64% chance of showing up on a given Web site, making them the two most common; content spoofing is No. 3 with 43%, the report says.
The other seven vulnerabilities in the top 10, in order, are cross-site request forgery, brute force, insufficient authorization, predictable resource location, SQL injection, session fixation and abuse of functionality, WhiteHat says.
The time it takes to fix vulnerabilities once they are identified is a key measure of site security, WhiteHat says. Banking does best there, with half of its vulnerabilities remediated within 13 days. Telecommunications sites are the worst, taking 205 days to remediate half of their Web site vulnerabilities, the report says. The average across all businesses is 116 days.
"From a risk management perspective, if the organization is a target of opportunity, perhaps a goal of being at or above average is good enough," the report says. "If, however, the organization is a target of choice, either ASAP or being among the fastest is more appropriate."
When Biogen Idec considered a move to the cloud, cost savings was not the primary concern. For a biotechnology company that lives and dies by its research division, the ability to quickly spin up computer resources for its scientists was far more important.
A pioneer in treatments for multiple sclerosis, Biogen Idec needed to quickly assign computing resources to support its researchers. Yet provisioning servers and applications for new projects requires a lot of planning, effort, and support, says William Hayes, director of IT for the R&D section's decision support group.
"One of the things that was a challenge for us is to get servers deployed so we can use them," Hayes says. "It takes anywhere from weeks, for virtual servers, to months, for physical servers."
The company's foray into cloud promises to change that, he says. Using an enterprise cloud gateway from CloudSwitch, the company can securely allocate new servers within a few minutes and at half the cost of using internal infrastructure, says Hayes.
1. First Wins Will Be Quick
In fact, the company has reduced the time to create a new server to less than ten minutes, he says. An IT manager can log into a Web site and create a Red Hat server or an Ubuntu server within a few minutes. Because the research groups have such disparate demands, the flexibility of quickly creating instances of different types of servers is a huge benefit, Hayes says.
"We tend to need a lot of throwaway servers of different sources, and in a lot of cases we need nonstandard servers," Hayes says.
The need for quickly provisioned resources is common among companies that focus on R&D, says Ellen Rubin, founder and vice president of products at CloudSwitch, a startup that focuses on easing access to cloud resources, especially for companies with legacy apps.
"These companies often can't get the physical IT resources quickly enough," she says. "Cloud is an inherently attractive thing for these companies."
2. Paying for Cloud Can Be Tricky
One downside to many cloud services, such as Amazon's EC2, is their relative inflexibility in terms of payment. It may seem odd, but many companies do not allow recurring payments through credit-card accounts.
Like many firms, Biogen Idec works on a purchase-order basis; the IT department does not have a credit card it can use to pay large recurring expenses. That created a problem when signing up for an account with Amazon's Elastic Compute Cloud service.
"You have to be very creative in how you pay Amazon," Biogen Idec's Hayes says. "We have actually contracted with an outside firm to pay Amazon and then bill us."
The lesson for many IT managers is that, while cloud technology highlights advances in delivering affordable and flexible computing, many companies' internal processes are much slower to advance. Getting accounting departments to change their policies to handle computing resources as operational expenses, rather than capital expenditures, will take a while, he says.
3. Per-Server Cost Savings Add Up
While Biogen Idec was not searching for a way to reduce the cost of provisioning researchers with computing resources, the savings helped sell Hayes on the benefits of cloud computing. Moreover, when moving well-tested services to production servers, the cost savings mattered more than agility, he says.
While Hayes would not discuss the costs of Amazon's service or the licensing costs of CloudSwitch, the cost of a fully provisioned virtual server in the cloud was less than half that of a physical server over a period of three years. In addition, the ability to pay for a server only during work hours, rather than around the clock, reduces the expense even more, he says.
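The work-hours saving is easy to rough out; the hourly rate below is a made-up figure for illustration, not Biogen Idec's actual pricing:

```python
# Back-of-the-envelope math for work-hours-only billing
rate = 0.34                       # assumed $/hour for one instance
always_on = rate * 24 * 365       # running around the clock
work_hours = rate * 10 * 252      # ~10h/day over ~252 business days
print(f"24/7:       ${always_on:,.0f} per year")
print(f"work hours: ${work_hours:,.0f} per year")
print(f"saving:     {1 - work_hours / always_on:.0%}")
```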
"I'm still astounded by how cheap this is," he says.
4. Protect Against Accidental Shutdown
Yet companies that put critical data in the cloud should beware that they could be setting themselves up for a serious business disaster. While denial-of-service attacks – such as the revenge attacks inspired by the controversy surrounding Wikileaks – are a major concern, simply failing to pay your bill could result in being disconnected from critical data, says Hayes.
"The cloud computing services, if you don't pay the bill, they will shut you down," Hayes says. "It is kind of hard to explain to your company that because finance could not pay the bill on time that you have a lot of interesting personal computers sitting on people's desktops."
The lesson for CIOs, Hayes says, is that whether a company's infrastructure depends on cloud computing or co-location facilities, the firm that manages your information technology controls your servers.
5. Security Not All Good or Bad
One reason Biogen Idec chose CloudSwitch to manage its cloud infrastructure is that the firm worried about putting its research data on servers rubbing shoulders with other companies' servers in Amazon's cloud. CloudSwitch adds a middleware layer that encrypts all data traveling outside a company's network and gives managers a single view of their resources, both on internal networks and in the cloud.
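As a rough illustration of the encrypt-before-it-leaves idea (not CloudSwitch's actual implementation), here is a minimal sketch using the symmetric Fernet scheme from the third-party Python "cryptography" package:

```python
# Encrypt data on-premises before it travels to the cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # key material stays inside the network
cipher = Fernet(key)

payload = b"proprietary research data"
ciphertext = cipher.encrypt(payload)    # this is what goes to the cloud
assert cipher.decrypt(ciphertext) == payload
```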
While placing data outside the corporate firewall makes any IT security manager nervous, Hayes says that Biogen Idec's security group is "fairly comfortable" with the technology.
"You don't get a 'this is good' or 'this is bad,'" he says. "You get a degree of goodness and badness."
Germany's federal finance ministry has pulled its website offline after receiving notification of a serious security problem from white hat hackers affiliated to the Chaos Computer Club (CCC).
Flaws on the Federal Finance Agency website reportedly created a means to spy on customers of the government agency, steal login credentials, or run phishing attacks. The bug reportedly existed for months before the CCC stumbled upon it. It is unclear whether the vulnerability was ever exploited or used as part of any scam.
The agency – Deutsche Finanzagentur – is involved in the placement of federal borrowing as well as the managing of federal debt. It also provides an entry point for internet banking services provided by bundeswertpapiere.de.
Flaws in the configuration of the web server used by the agency created a means to mount hard-to-detect phishing attacks, according to an advisory (in German) on the breach published by CCC over the weekend.
A notice on the Deutsche Finanzagentur site said that it was temporarily unavailable, without giving any indication of when services might be restored.
The PRIMERGY RX200 server gains pole position in VMware’s latest industry benchmark
We are pleased to announce that our PRIMERGY RX200 S6 rack server holds pole position in VMware's new VMmark V2.0 industry benchmark, which has been extended to measure both the performance and scalability of servers running virtualized applications in a multi-host environment.
The world-record holding RX300 S6 tops TPC-E price/performance benchmarks as Fujitsu claims top 4 spots
PRIMERGY RX rack servers have retained all three top slots in TPC-E price/performance tests for more than two months, making Fujitsu the first vendor in two and a half years to sustain triple top results. The RX300 S6, currently ranked 1st, holds the TPC-E benchmark world record for best price/performance in online transaction processing (OLTP).
The highly acclaimed ETERNUS CS800 is the leading solution for backup and archiving: it’s faster, more scalable and offers better price/performance than its key competitors. Here are 10 questions to help you identify an ETERNUS CS800 opportunity:
Are your backups taking an excessive amount of time?
Do you have difficulty finding data when you need to restore lost files or systems?
Are you happy with the quality of tapes that your backups are stored on?
Do you have email performance problems? (These are often caused by poor email archiving, for which the CS800 plus archiving software is an excellent solution)
Do you regularly archive data? If so, how quickly and easily can you retrieve it?
Can you easily consolidate backups from different sites?
Does your backup automatically replicate data to your DR site?
Is your backup solution highly scalable to meet your growing data requirements?
Would you be interested in a backup and archive solution that typically pays for itself in 12-15 months?
So help yourself and your customers by getting backup and archiving running smoothly and reliably with the ETERNUS CS800.
DR Case Studies – see and read why you should choose StorageCraft with Discus
Live Disaster Recovery Seminar - See a StorageCraft live seminar and find out why more and more IT Service Providers are choosing ShadowProtect for their disaster recovery and business continuity solutions with features including:
Onsite server failover in 5 minutes using VirtualBoot™
Near-instant offsite recovery with HeadStart Restore™ - NEW: featuring VMware ESX and ESXi compatibility
How to recover and migrate P2P, P2V, V2V & V2P with Hardware Independent Restore™.
StorageCraft Technology Corporation is a premier backup and disaster recovery software company. StorageCraft focuses on providing innovative disk-based backup, disaster recovery, system migration, data protection, business continuity, and security solutions. StorageCraft delivers software products that reduce downtime, improve security and stability for systems and data, and lower the total cost of ownership for servers, desktops, and laptops. The company's history dates to 1999, when StorageCraft was developing technology that is currently installed on millions of systems around the world. In 2004, it became StorageCraft Technology Corporation and began developing enterprise solutions, incorporating its core technology and high standards of innovation into its own world-class software.
IBM has stolen the server crown back from rival Hewlett-Packard.
For the past five decades, Big Blue was the top systems seller based on revenues. But then along came the Great Recession, hitting at the same time as transitions in the Power Systems and System z product lines, allowing HP to reach down and steal the crown away. But now it's back.
So how long will IBM be able to hold it this time?
The box counters at Gartner reckon that server buyers bought 2.38 million servers in the fourth quarter, a 6.5 per cent boost in boxes compared to the number that went out the door in the final quarter of 2009. Because of a rebound in heavier server configurations – driven in part by fatter x64-based machines set up for server virtualization, and by surging mainframe sales in the wake of the launch of the System zEnterprise 196 machines last July and their shipment in late September – aggregate server revenues were up by 16.4 per cent, to $14.68bn.
For the full 2010 year, server makers pushed out 8.84 million machines, an increase of 16.8 per cent over 2009, but revenues hit only $48.8bn, rising only 13.2 per cent because of a mainframe slump in the first three quarters of 2010 and an ongoing slump in RISC/Itanium Unix system sales that started ahead of the Recession.
"2010 was a year that saw pent-up x86-based server demand produce some significant growth on a worldwide level," explained Jeffrey Hewitt, research vice president at Gartner, in a statement accompanying the server market stats. "The introduction of new processors from Intel and AMD toward the end of 2009 helped fuel a pretty significant replacement cycle of servers that had been maintained in place during the economic downturn in 2009."
The growth in server spending was highest in North America, rising 24.5 per cent year-on-year, followed by the Asia/Pacific region, with 22.4 per cent growth. Latin America had 12.3 per cent growth in aggregate server spending compared to 2009, while Europe, the Middle East, and Africa had a less stunning 10.4 per cent growth rate. Japan, which has never successfully got out from under its own economic collapse in the early 1990s, actually saw a 4.4 per cent decline in server spending in 2010.
In the fourth quarter, IBM shipped only 332,254 boxes (up 3.8 per cent), according to Gartner, but the higher prices that Big Blue commands for its mainframes helped boost revenues to $5.21bn (up 26.4 per cent). I doubt IBM or anyone else thinks the mainframe boom can be sustained at this level, but it probably feels pretty good when it happens. HP was the top system shipper, with 767,026 boxes going out its factory doors (up 6.9 per cent and slightly above the market average), but because HP peddles less expensive x64 iron for the most part, the company's revenues only hit $4.46bn. That said, HP still grew sales by 12.8 per cent, which is not bad for a commodity server business.
However, Dell did much better on a pure commodity x64 server play, with shipments up 6.3 per cent, to 515,274 boxes, in Q4 2010, and with revenues growing right along at IBM's 26.4 per cent pace to hit $1.92bn. HP lost a point of revenue market share and Dell caught it, basically. IBM gained nearly three points of revenue share, and will likely gain share in the first quarter and maybe the second quarter of 2011 as well. But at some point, pent-up demand for mainframes will run out and Big Blue will have to start doing deals, driving down revenues per machine.
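The share points Gartner is talking about fall straight out of the Q4 revenue figures above:

```python
# Revenue shares implied by the Gartner Q4 2010 figures
total = 14.68  # $bn, all vendors
for vendor, revenue in [("IBM", 5.21), ("HP", 4.46), ("Dell", 1.92)]:
    print(f"{vendor}: {revenue / total:.1%} of Q4 server revenue")
# IBM ~35.5 per cent, HP ~30.4 per cent, Dell ~13.1 per cent
```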
Oracle continues to bleed like a stuck pig in the server biz. Gartner believes that Oracle's server business declined by 16.2 per cent in the fourth quarter of 2010, to $805.6m, when everyone else was doing swimmingly. Oracle's box count fell by 40.8 per cent, to a mere 36,614 machines. No matter how big the pipeline might be for Exadata, Exalogic, and SuperCluster appliances, this pipeline is not resulting in a rebound in Oracle's server business.
Oracle said it was going to walk away from unprofitable server deals, and presumably it is making money on the much-diminished Sun Fire and Sparc Enterprise M servers it does sell. The fourth calendar quarter of 2009 was the last one that Sun Microsystems did as a free-standing company, so it is not exactly a fair comparison. But the first quarter of 2011 sure will be, and Oracle had better show some growth this year or co-founder Larry Ellison will be taking questions about how Oracle has failed at the server business.
Oh, wait. I forgot. Oracle doesn't take questions. It just makes pronouncements. And money, of course.
Oracle's partner in the server racket, Fujitsu, rounded out the top five, with $560.4m in sales (down a half point) against 75,716 shipments (up 12.4 per cent and growing twice as fast as the market at large in terms of shipments).
Other vendors - including Cray, Silicon Graphics, Super Micro, and a host of whitebox vendors - managed to wrest $1.72bn in revenues away from the top five, almost matching the overall market with 15.8 per cent revenue growth. These other companies shipped an aggregate of 654,544 boxes, up 12.1 per cent from the year-ago period.
Gartner believes that a paltry 55,249 RISC/Itanium machines were sold in the fourth quarter of 2010, a decline of 10.7 per cent. But these machines generated $2.97bn in revenues, down nine-tenths of a point. The box counter believes that IBM's Power Systems running AIX had a 10 per cent revenue bump, hitting $1.33bn, compared to HP's Unix system sales of $829.2m (down 5.4 per cent), Oracle's $637.3m (down 15.5 per cent), and Fujitsu's $77.4m (up 20.3 per cent). French server maker Bull, which resells IBM Power Systems gear, had $67.7m in sales, down 11.9 per cent. Other Unix system makers accounted for $24.2m, double the figure from a year ago.
X64-based iron continued to be the revenue and shipment driver in the server space, with 2.32 million boxes generating $9.11bn in revenues. Shipments of x64 boxes rose by only 7.1 per cent in the quarter, but revenues rose by 20 per cent as companies fattened up their boxes for virtualization. HP maintained its top spot in the x64 space, with 754,503 boxes driving $3.43bn in sales.
Dell's business is all x64 iron, and its stats gave it a solid number two position here, but it is still nowhere near catching HP – even with all the units that its Data Center Solutions bespoke server division is kicking out for hyperscale data center customers. IBM's x64 server shipments stalled in Q4, but its memory-heavy System x and BladeCenter machines seemed to do well, helping push its x64 server revenues up 18.1 per cent, to $1.69bn.
It is not an easy time to be a Unix server vendor, but at least it has stopped getting harder.
According to the latest statistics from the box counters at IDC, worldwide sales of Unix servers were flat as a pancake, growing four-tenths of a per cent to hit $3.8bn in revenues. That drops Unix from about half of server revenues a decade ago to about a quarter of the money pie here in the 2010s. Yes, a lot of servers ship with no operating system, and IDC does guesstimates to figure out which OSes are the primary ones on the boxes, so there is a bit of witchcraft in the numbers. But for Unix and mainframe machines, you pretty much know what buyers are plunking on the boxes.
By contrast, IDC figures that sales of mainframes shot up by 69.1 per cent in the fourth quarter, to hit $1.7bn – the highest growth spike that IDC has ever recorded for mainframes. Servers with Linux as their primary operating system saw a 29.3 per cent jump in revenues, hitting $2.5bn and representing more than 450,000 units shipped. Windows was the primary operating system on 1.5 million boxes, according to IDC, and these machines generated $6.3bn in revenues, up 16.8 per cent compared to the year-ago quarter. That shipment level for Windows boxes is the highest in the history of the platform, too.
(Note: IDC's server figures include the processors, chassis, memory, base disk, and I/O features as well as a core operating system, sold directly by vendors or indirectly through the channel, but with the dollars reckoned at the factory level. So this means you see the machines vendors sold into the channel during Q4, not necessarily the machines that the channel resellers sold to customers in that same period. Sometimes, customers are getting stuff passed through quickly, and sometimes they are buying older inventory.)
When you add it all up, the overall server market had revenue growth of 15.3 per cent, hitting $14.96bn, in the fourth quarter. Shipments rose by 6.1 per cent, to a total of 2.06 million boxes. For the full year, IDC says server revenues worldwide were up 11.4 per cent, to $48.1bn.
For some bizarre reason, Hewlett-Packard declared in its press release covering the IDC numbers that it was the leader in worldwide server revenue and shipments for the year. But if you look at the data, IBM beat HP resoundingly in revenues in the fourth quarter, $5.59bn versus $4.47bn, and for the full year IBM's $15.3bn was actually $32m bigger than HP's $15.3bn. (The difference is rounded away at three significant digits, but server makers fight over those scraps of millions.) IBM grew server sales by 21.9 per cent in Q4, while HP grew at only 13.2 per cent, a bit slower than the market at large. For the year, however, IBM had only 8.5 per cent growth, while HP did 18.9 per cent.
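The rounding quirk is easy to demonstrate; the "exact" figures below are hypothetical values consistent with the $32m gap:

```python
# How a $32m gap vanishes at three significant digits
ibm, hp = 15.310, 15.278                         # $bn, $32m apart
print(f"IBM ${ibm:.3g}bn vs HP ${hp:.3g}bn")     # both print as 15.3
print(f"difference: ${(ibm - hp) * 1000:.0f}m")  # 32
```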
Dell posted $1.89bn in server revenues in Q4, up 26.8 per cent, and crested just above $7bn for all of 2010, with an impressive 34.2 per cent spike. IDC still calls it Sun in its tables, but it is really Oracle, and Q4 was not a particularly good one for the software giant and server upstart. Oracle's server sales were off 14.4 per cent in Q4, to $883m, and for the full year sales were $3.28bn, down 14 per cent.
By the second quarter of 2011, the compares should get easier for Oracle but the real question is when the company will see server revenues actually go up. To its credit, Oracle seems to be turning a profit on the Sun business, something Sun itself had not been able to do reliably for the better part of a decade.
Fujitsu pushed $541m in iron in Q4, down 9.4 per cent, and had sales of $2.19bn for all of last year, flat compared to 2009. Other vendors made up $1.59bn in revenues in the fourth quarter of 2010, up 20.1 per cent and helped by improving sales at Silicon Graphics, Cray, Super Micro and a slew of whitebox vendors. For the full 2010 year, other server makers didn't do as well, only growing sales by a half-point to just under $5bn and about a tenth of the overall market.
The x64 processors from Intel and Advanced Micro Devices drove the bulk of server sales in the final quarter of 2010, just as they have been doing for the better part of a decade. Vendors shipped 2 million boxes using Xeon or Opteron processors, an increase of 6.7 per cent year-on-year. Revenues for Xeon and Opteron boxes rose by 21.4 per cent in the quarter, hitting $9bn, helped by fatter two-socket boxes and a smattering of absolutely obese four-socket and eight-socket machines. For the year, x64-based server sales were up 28.7 per cent, to $30.6bn, with units up 16.6 per cent, hitting 7.4 million machines.
Jed Scaramella, research manager of IDC's enterprise platforms and data center trends group, told El Reg that Opteron-based machines accounted for about 7 per cent of shipments in Q4, down a smidgen year-on-year but up a bit sequentially. Intel, of course, had the other 93 per cent.
So what is 2011 going to look like? IDC is cautiously optimistic, as you might expect. "I think that the server refresh has worked through most of the pent-up demand left over from the economic downturn," says Scaramella. "But the market is still going to grow. I think we'll see another good quarter for non-x86 systems, but after a spike, they usually fall off to a long tail."
Cisco is upping the unified communications stakes with the launch of Cisco Jabber, which will bring presence, instant messaging, voice and video, voice messaging, desktop sharing and conferencing to the device of your choice.
While Mac users will have to wait until the summer, Jabber is available today or in development for Windows, iPhone, iPad, Nokia, Android and BlackBerry platforms. The application also integrates with Cisco video endpoints including Unified IP Phones, WebEx MeetingCenter and TelePresence connections.
Prior to acquiring Jabber in late 2008, Cisco had been working with the real-time messaging company to integrate its Unified MeetingPlace conferencing product with Jabber Extensible Communications Platform (Jabber XCP). This week's announcement means users can collaborate from any workspace and any device, says Cisco. It's not just about the PC anymore, Cisco adds.
Based on the Extensible Messaging and Presence Protocol (XMPP) for presence and IM, Jabber enables users to interact with others regardless of whether they're using applications from Google, IBM, Microsoft or AOL. In addition, the integration with Microsoft Office enables users to see a colleague's availability status and escalate communications to an instant message, phone call or conference from within the application, says Cisco.
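For the curious, the XMPP plumbing underneath Jabber is simple enough to sketch. Here is a minimal presence-and-IM example using the Python slixmpp library, with placeholder accounts; it illustrates the open protocol, not Cisco's client code:

```python
# Minimal XMPP bot: publish presence, fetch the roster, send one IM.
import slixmpp

class HelloBot(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()            # broadcast availability
        await self.get_roster()         # fetch the contact list
        self.send_message(mto="colleague@example.com",
                          mbody="Free for a quick call?",
                          mtype="chat")
        self.disconnect()

bot = HelloBot("me@example.com", "secret")
bot.connect()
bot.process(forever=False)
```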
In addition to the Jabber announcement, the company also unveiled two unified communications solutions for small and midsized businesses. The Cisco Unified Communications 300 Series for two to 24 users includes data and wireless support, along with features such as voicemail and automated attendant.
Its bigger brother, the Cisco Unified Communications Manager Business Edition 3000, supports 300 users across 10 sites. The 300 Series' UC 320W is currently available, listing at $995, while the 3000 is scheduled to be available in the second quarter, at $12,400 for 100 users.
According to the most recent Gartner Magic Quadrant for Unified Communications (Oct. 15, 2010), Microsoft, Cisco and Avaya (in that order) were the only three in the Leaders quadrant, down from six vendors last year. However, it cautions that despite the emergence of complete UC portfolios, these products are still in an early stage, and no vendor product adequately addresses all of an enterprise's UC needs.
While Forrester analyst Ted Schadler agrees that UC adoption is still a work in progress, he believes Cisco Jabber is a solid branding decision because it unifies Cisco's real-time collaboration assets under a single product line. "But it's also a nice extension to Cisco's unified communications [meaning voice and video conferencing] products. Finally, having a click-to-conference and a soft phone/video client will let Cisco compete directly with Microsoft Lync and IBM Sametime."