Data Center Wars Chicago – The Lines Have Blurred

Prevailing wisdom would have you believe that there are two distinct data center markets in Chicago: city vs. suburbs.  While it may be easier to divide the market geographically, doing so does little to inform data center users of any real differences in the marketplace.  As the lines between retail colocation, wholesale data centers, and managed service/cloud providers continue to blur, it becomes ever more critical that end users understand these subtle differences and their long-term influences.

Legacy Wholesale Providers

New competitors, along with the changing focus of existing data centers, will have a far greater impact on the Chicago market in 2015 than simple geography.  On the wholesale front, planned expansions by DuPont Fabros in Elk Grove Village and Digital Realty in Franklin Park are underway.  Recent activity by both companies has shown increased emphasis on pursuing smaller colocation opportunities that fall outside their traditional areas of expertise.  New wholesale competitors in the Chicago area already working under a similar hybrid approach include QTS, with its recent acquisition of the former Sun Times plant, and Byte Grid, which acquired the former CNA Insurance facility in Aurora.

A common theme among most of the wholesale companies listed above has been a slow move toward vertical integration.  In addition to wholesale/retail integration, network solutions and cloud connectivity options have also become quite prevalent among the traditional wholesale providers.  This is a trend we first witnessed in the retail space with Equinix's Financial Exchange and Equinix Exchange services; Equinix has since expanded its offerings into areas as diverse as CDN and WAN optimization.  Expect this trend to accelerate as a result of acquisition and consolidation within the industry (see Latisys/Zayo), with the possibility of data center providers starting to look more like legacy telecommunications companies.

Legacy Retail Providers

Among local Chicago-area retail providers, expansions of existing facilities have also been on the rise.  Latisys' very successful Oak Brook facility has initiated yet another 25,000 sq. ft., 3.6-megawatt expansion.  Continuum Data Centers has also pursued growth through its recent move from Lombard to a newly upgraded, 80,000 sq. ft. facility in West Chicago.  Altered Scale in downtown Chicago has recently completed extensive upgrades to both infrastructure and client amenities to attract a wider cross section of end users.  Additionally, we have heard of a rumored expansion of the Savvis/CenturyLink data center in Elk Grove Village, IL.

Hybrid Providers

Probably the most interesting newcomer to the Chicago market is long-time engineering and IT consulting firm Forsythe Technology.  With its new 221,000 sq. ft. Elk Grove Village data center offering private 1,000-square-foot client suites, Forsythe has put a new spin on wholesale/retail hybridization.  The development will be worth watching, largely because of Forsythe's unique approach of offering consulting services to support everything from move logistics to hardware and software installation and maintenance.  While this does appear to be the most vertically integrated data center solution we've seen in the area, many questions still surround this unique approach.

If you need a more detailed view of the Chicago data center market, please contact us directly.  Todd and Kevin can be reached via e-mail at tsmith@kiameshaglobal.com and kknight@kiameshaglobal.com.

About the Authors:
Todd Smith and Kevin Knight specialize in the data center facility market, working for the technology advisory firm Kiamesha Global (www.kiameshaglobal.com).  If your organization is considering the potential benefits of a data center relocation or expansion, or simply wants to better understand its options in the data center marketplace, Todd and Kevin can be contacted via e-mail at tsmith@kiameshaglobal.com and kknight@kiameshaglobal.com.  Even if you have no changes under consideration for the new year, they would welcome the opportunity to provide you with an assessment of the current market value of your existing data center portfolio's in-house and/or colocated facility assets, so that you can better recognize where you stand in this rapidly evolving market.

A Data Center Migration Success Story

The Challenge

Houston-based Cabot Oil & Gas, a publicly traded company (COG) that produced nearly $2 billion in revenue this past year and one of Fortune's 100 fastest-growing companies for 2014, recently turned to Kiamesha Global for assistance with its core production data center, located within its headquarters building.  With Cabot's rapid growth, the need for higher resiliency for its core business applications has increased, and housing the critical infrastructure running those applications in a high-quality data center facility designed to meet high uptime requirements became a necessity.

The Solution

To address the data center requirement, advisors from Kiamesha Global provided the Cabot team with the following Advisory and Agency services:

  • Total Cost of Ownership Analysis of In-House Data Center Operations
  • Market Intelligence on Houston Area Data Center Colocation Providers
  • Sizing of Wide Area Network Services Required to Operate the Data Center Off-Site
  • Procurement Management of Data Center Colocation and Network Services
  • Technical & Physical Migration Support

The Results

Norbert Burch, Technical Manager for Cabot, was pleased with the results of the project.

“Kiamesha Global was instrumental in assisting Cabot in our recent datacenter move.  We had a four month window of opportunity to make the move.  They helped us do a cost analysis, select the best fit datacenter, negotiate contracts and put together an excellent team all in a very short amount of time.  The datacenter move was successfully completed within the planned outage window.  We have been running at the new datacenter for almost 3 months.  We are very satisfied with the whole project and look forward to doing business with Kiamesha Global in the future.”

The Cores of What’s Next

This article was written by Steve Carl, Sr. Manager, Global Data Centers, Server Provisioning at BMC Software.

In my last Green IT post I looked at the Green / Power side of CPUs and Cores. Here I want to open that up, and have a look around.

Framing this thought experiment is the idea that we are running out of road with Moore’s Observation.

What the Observation Really Is

It is worth noting here that what Moore observed was not that things would go twice as fast every two years, or that things would cost half as much every two years. That sort of happened as a side effect, but the real nut of it was that the number of transistors in an integrated circuit doubles approximately every two years.

Just because the transistor count doubled does not mean it's twice as fast, any more than a 1 GHz chip from one place is half as fast as a 2 GHz chip from a different place, because it all depends. Double the transistors only means it is twice as complex, and probably twice as big if the fab process size stays the same.

Since the Observation was made in 1965, doubling what an IC had back then was not the same order of magnitude as doubling it now. IBM's Power 7, which came out in 2010, has 1.2 billion transistors and is made using 45-nanometer lithography. Three years on, the Power 8 uses 22-nanometer lithography, and the 12-core version has 4.2 billion transistors.

To stay on that arc, the Power 9 would have to be on 11-nanometer lithography and have over eight billion transistors. However, from what I have read, IBM's and Intel's next step down for server processors is 14 nanometers, not 11.  It may not seem like a big difference, but when you are talking about billionths of a meter, you are talking about creating and manipulating things the size of a SMALL virus. We are in the wavelength of X-rays here.
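
Just to make that arithmetic concrete, here is a minimal sketch in Python that runs the simple doubling rule forward from the Power 8 figures above; the projected generations are hypothetical placeholders for the sake of the example, not anything on a roadmap.

    # A rough sketch of the doubling arithmetic above. The Power 8 figures come
    # from this post; everything projected past it is hypothetical.
    transistors_b = 4.2   # Power 8, 12-core: ~4.2 billion transistors
    node_nm = 22.0        # Power 8 lithography: 22 nm

    for label in ("Power 9 (projected)", "Two generations out", "Three generations out"):
        transistors_b *= 2   # Moore's observation: transistor count doubles
        node_nm /= 2         # and the feature size roughly halves each step
        print(f"{label}: ~{transistors_b:.1f}B transistors at ~{node_nm:.1f} nm")
    # The first step lands on ~8.4B transistors at ~11 nm, matching the estimate above.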

A silicon atom is about 0.2 nanometers across. We are not too many halvings away from trying to build pathways one atom wide, and quantum mechanics is a real bear to deal with at that scale. Personally, I don't even try.
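
To put a number on "not too many," here is the same sort of napkin math, again only a sketch built on the figures quoted in this post:

    import math

    # How many more halvings of the process node until a pathway is roughly one
    # silicon atom (~0.2 nm) wide? Back-of-the-envelope only.
    atom_nm = 0.2
    for node_nm in (22, 14, 11):
        halvings = math.log2(node_nm / atom_nm)
        print(f"{node_nm} nm: about {halvings:.1f} halvings to a one-atom-wide path")
    # Prints roughly 6.8, 6.1, and 5.8 halvings respectively.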

So we'll do other things. We'll start making chips taller, with more layers. The die will get bigger. Getting more cores into a socket will mean the socket gets physically larger… up to a point. That point is the balance between the power going in and the heat you can pull back out. Seen a heat sink on a 220-watt socket lately? They are huge.

[Image: Moore's Law]

The Design, the Cost, the Chips to Fall

Ok. So making chips is going to get harder. Who can afford to invest the time and effort to build the tooling and the process to make these tiny, hot little things?

Over the last 10 or 15 years we have watched the vendors fall. After kicking Intel's tush around the x86 marketplace by creating the AMD64 chips, and thereby dooming the Itanium, AMD ended up divesting itself of its chip fabrication plants, creating Global Foundries in the process.

Before that, HP had decided chip-making was not anything it wanted to be doing anymore, and made plans to dump the Alpha it had acquired from Digital via Compaq. It also decided to stop making the PA-RISC line and instead migrate to the doomed Itanium. To be fair, HP didn't know what AMD was going to do to that design. But there is a reason the Itanium's nickname was the Itanic, although it has actually lasted longer than most would have thought.

Intel could not let AMD have all the fun in the 64-bit x86-compatible world, and pedaled hard to catch back up. Intel is having fun at AMD's expense these days, but I never count AMD out. AMD was not only first to the 64-bit x86 market; it had all the cool virtualization assists first, too.

Meanwhile, IBM opened itself up to all sorts of speculation by PAYING Global Foundries to take its fab business: Please. I guess the gaming platforms moving away from Power just hurt too much. Those were the days.

That leaves us with three chip architectures for your future data center:

  • AMD64 (x86-64)
  • Power
  • SPARC

Plus one new one:

  • ARM

Death by 1000 Cuts

Yes: Itanium is still around, and may be for a while. If you have a Tandem / HP NonStop, then you have these for now, until HP finally moves them to AMD64. If HP wants feature and speed parity with what's going on in the rest of the world, it will have to do something like that.

The VMS operating system problem was solved by porting it to AMD64 via VMS Software, Inc. And HP-UX (my first UNIX OS) seems to be slowly turning into Linux customers on, you guessed it, AMD64 chips. HP is a big player in the Linux space, so that makes sense. HP-UX 11i v3 keeps getting updated, but the release cadence relative to the industry, especially Linux, looks like it is meant to be on hold rather than ported away from Itanium. Let's face it: if you have to sue someone to support you (http://www.businessinsider.com/hp-shows-off-new-itanium-servers-2012-11), your platform probably has larger issues to deal with. Not trying to be snarky there either. Microsoft and Red Hat dropped their support for the chip. Server Watch says that it's all over, too. So does PC World.

Linux runs on everything, so if Linux doesn't run on your chip… Just saying here that you probably do not have to think about where in your DC to put that brand new Itanium-based computer.

What does all this mean for What’s Next?

There are a few obvious outcomes to this line of thinking. One is that the operating systems of the next decade are fewer. Next is that operating systems themselves are going to hide. Really: as much as I love Linux, no one in the marketing department cares what OS their application is running on / under. It's a hard thing for a computer person to see sometimes, but the change that mobile, DC consolidation, and outsourcing (sometimes called "Cloud Computing") have brought is that the application itself is king. It's the applications' world, and our data centers are just the big central place they run in.

Clearly Linux and MS Windows are on upward trajectories. Every major player (IBM, HP, Oracle, and so on) supports those two.

The SPARC / Solaris and Power / AIX applications are still alive and kicking. By spinning off its x86 server business to the same folks that bought its laptop business (and Lenovo made that laptop business work out pretty well for themselves), IBM is left with only high-end servers (the i Series is technically called midrange). IBM wants to be in the DC, where the margin is. It is the same thing, more or less, at Sun/Oracle: all their server hardware is being focused on making their core product run faster.

HP will be in the AMD64 or ARM world, and that's pretty interesting. The Moonshot product is nothing I have personally been able to play with, but it makes all kinds of sense. If you don't need massive CPU horsepower, you can do some pretty nice appliance-like things here.  And since the application is king, not the hardware it runs on, having lots of little application units in a grid that are easy to just swap when they fail has a very Internet-like flavor to it.

[Image: water cooled]

How will Santa Package all our new Toys?

Looking at Moonshot and all the various CPUs, it seems that, for a while at least, we'll be seeing CPUs inserted into sockets or onto ball grid arrays (surface mounted). Apple has certainly proved with the Air line that soldering to the mainboard solves lots of packaging problems. At least until the chips get thicker and start having water-cooling pipes running through them, because air just can't pull heat away the way that water can.

Yep: liquid in the data center (spill cleanup on aisle three). We can be as clever about the packaging as we like, but physics rules here, and continuing to make these chips faster / better / cheaper is more than likely going to mean a return to hotter. That's a real problem in a blade chassis.  Even if the water is a closed loop, self-contained to the airflow of the RAM / CPU air path, it means taller. Wider.

Or, you go the other way, and just do slower but more of them. Like hundreds of Mac Minis stacked wide and deep, or thin slivers of mobos from Airs ranked thirty across and four deep on every tray / shelf. You wouldn't replace the CPU anymore; the entire board assembly, with CPU and RAM, would become the service unit. Maybe everything fits into a drawer the same way that disk vendors do it now.

When I designed our most recent data center, it was extremely hard to stay inside the 24-inch / 600 mm rack width. By going taller (48U) I could put more servers in one rack, which meant more power and wiring to keep neatly dressed off to the side, in a rack that had little side room. The network racks are all 750 mm wide for that exact reason.
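
Here is a rough sketch of that density trade-off; the 1U form factor, per-server wattage, and cable counts below are illustrative assumptions for the example, not the actual numbers from that build.

    # Illustrative only: what going taller pulls into the same 600 mm footprint.
    # The 1U form factor, 350 W per server, and 4 cables per server are assumptions.
    def rack_totals(usable_u, watts_per_server=350, cables_per_server=4):
        servers = usable_u  # assuming 1U servers, one per rack unit
        return servers, servers * watts_per_server / 1000.0, servers * cables_per_server

    for height_u in (42, 48):
        servers, kw, cables = rack_totals(height_u)
        print(f"{height_u}U rack: {servers} servers, ~{kw:.1f} kW, {cables} cables to dress")
    # The extra 6U adds six servers, about 2.1 kW, and 24 more cables per rack.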

If we go uber-dense on the packaging because of the CPU design limits, then what does that mean for the cabling? Converge the infrastructure all you like; the data paths to that density are going to grow, and 40 Gb and 100 Gb Ethernet don't actually travel in the ether. I know!

That conversation is for another post though.