Blog

A Data Center Migration Success Story

The Challenge

Houston-based Cabot Oil & Gas, a publicly traded company (COG) that produced nearly $2 billion in revenue this past year and was one of Fortune’s 100 fastest-growing companies for 2014, recently turned to Kiamesha Global for assistance with its core production data center, located within its headquarters building.  With Cabot’s rapid growth, the need for higher resiliency for its core business applications has increased.  Housing the critical infrastructure that runs these applications in a high-quality data center facility designed to meet stringent uptime requirements became a necessity.

The Solution

To address the data center requirement, advisors from Kiamesha Global provided the Cabot team with the following Advisory and Agency services:

  • Total Cost of Ownership Analysis of In-House Data Center Operations
  • Market Intelligence on Houston Area Data Center Colocation Providers
  • Sizing of Wide Area Network Services Required to Operate the Data Center Off-Site
  • Procurement Management of Data Center Colocation and Network Services
  • Technical & Physical Migration Support

The Results

Norbert Burch, Technical Manager for Cabot, was pleased with the results of the project.

“Kiamesha Global was instrumental in assisting Cabot in our recent datacenter move.  We had a four month window of opportunity to make the move.  They helped us do a cost analysis, select the best fit datacenter, negotiate contracts and put together an excellent team all in a very short amount of time.  The datacenter move was successfully completed within the planned outage window.  We have been running at the new datacenter for almost 3 months.  We are very satisfied with the whole project and look forward to doing business with Kiamesha Global in the future.”


The Cores of What’s Next

This article was written by Steve Carl, Sr. Manager, Global Data Centers, Server Provisioning at BMC Software.

In my last Green IT post I looked at the Green / Power side of CPUs and Cores. Here I want to open that up, and have a look around.

Framing this thought experiment is the idea that we are running out of road with Moore’s Observation.

What the Observation Really Is

It is worth noting here that what Moore observed was not that things would go twice as fast every two years, or that things would cost half as much every two years. That sort of happened as a side effect, but the real nut of it was that the number of transistors in an integrated circuit doubles approximately every two years.

Just because the transistors doubled does not mean it is twice as fast. Not any more than a 1 GHz chip from one place is half as fast as a 2 GHz chip from a different place, because it all depends. Double the transistors only means it is twice as complex. Probably twice as big, if the fabrication process size stays the same.

Since the Observation was made in 1965, doubling what an IC had back then was not the same order of magnitude as doubling it now. IBM’s Power 7, which came out in 2010, has 1.2 billion transistors. It is made using 45 nanometer lithography. Three years on, the Power 8 is using 22 nanometer lithography, and the 12-core version has 4.2 billion transistors.

To stay on that arc, the Power 9 would have to be on 11 nanometer lithography and have over eight billion transistors. However, from what I have read, IBM and Intel’s next step down for server processors is 14 nanometer, not 11. It may not seem like a big difference, but when you are talking about billionths of a meter, you are talking about creating and manipulating things the size of a SMALL virus. We are in the wavelength of X-rays here.

A silicon atom is about 0.2 nanometers. We are not too many halvings away from trying to build pathways one atom wide, and quantum mechanics is a real bear to deal with at that scale. Personally, I don’t even try.
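To make that arithmetic concrete, here is a small, purely illustrative Python sketch. The POWER7/POWER8 figures and the 0.2 nm atom size are the ones quoted above; the "next step" projection is naive doubling/halving, not a statement about any actual roadmap.

```python
import math

# Back-of-the-envelope sketch of the scaling arc described above. The POWER7
# and POWER8 figures are the ones quoted in the text; the projection is just
# naive doubling/halving, not a claim about any real product roadmap.
generations = [
    ("POWER7 (2010)", 45, 1.2e9),   # (name, lithography in nm, transistors)
    ("POWER8 (2013)", 22, 4.2e9),
]

name, litho_nm, transistors = generations[-1]
projected_litho = litho_nm / 2           # 22 nm -> 11 nm if the halving continued
projected_transistors = transistors * 2  # "over eight billion transistors"
print(f"Staying on the arc: ~{projected_litho:.0f} nm, "
      f"~{projected_transistors / 1e9:.1f}B transistors")

# How many more halvings from a 14 nm process down to a ~0.2 nm silicon atom?
print(f"Halvings left before atomic scale: ~{math.log2(14 / 0.2):.1f}")
```

Roughly six halvings of the feature size separate a 14 nm process from the size of a single silicon atom, which is the point of the paragraph above.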

So we’ll do other things. We’ll start making them taller, with more layers. The die will get bigger. Getting more cores into a socket will mean the socket gets physically larger… up to a point. That point is the balance between heat removal at the atomic scale and power. Seen a heat sink on a 220 watt socket lately? They are huge.

[Image: moores-law]

The Design, the Cost, the Chips to Fall

Ok. So making chips is going to get harder. Who can afford to invest the time and effort to build the tooling and the process to make these tiny, hot little things?

Over the last 10 or 15 years we have watched the vendors fall. After kicking Intel’s tush around the x86 marketplace by creating the AMD64 chips, and thereby dooming the Itanium, AMD ended up divesting itself of its chip fabrication plants, creating Global Foundries in the process.

Before that, HP had decided chip making was not anything it wanted to be doing anymore, and made plans to dump the Alpha it had acquired from Digital via Compaq. It also decided to stop making the PA-RISC line and instead migrate to the doomed Itanium. To be fair, HP didn’t know what AMD was going to do to that design. But there is a reason the Itanium’s nickname was the Itanic, and actually it has lasted a while longer than most would have thought.

Intel could not let AMD have all the fun in the 64-bit x86-compatible world, and pedaled hard to catch back up. They are having fun at AMD’s expense these days, but I never count AMD out. AMD was not only first to the 64-bit x86 market, it had all the cool virtualization assists first.

Meanwhile IBM opened itself up to all sorts of speculation by PAYING Global Foundries to take its fab business: please. I guess the gaming platforms moving away from Power just hurt too much. Those were the days.

That leaves us with three chip architectures for your future Data Center:

  • AMD64 (x86-64)
  • POWER
  • SPARC

Plus one new one:

  • ARM

Death by 1000 Cuts

Yes: Itanium is still around. It may be for a while. If you have a Tandem / HP NonStop, then you have these for now. Until HP finally moves them to AMD64. If they want feature / speed parity with what’s going on in the rest of the world, they’ll have to do something like that.

The VMS Operating System problem was solved by porting it to AMD64 via VMS Software, Inc. And HP-UX (my first UNIX OS) seems to be slowly turning into Linux customers on, you guessed it, AMD64 chips. HP is a big player in the Linux space, so that makes sense. HP-UX 11i v3 keeps getting updated, but the release cadence relative to the industry, especially Linux, looks like it is meant to be on hold rather than ported away from Itanium. Let’s face it, if you have to sue someone to support you (http://www.businessinsider.com/hp-shows-off-new-itanium-servers-2012-11), your platform probably has larger issues to deal with. Not trying to be snarky there either. Microsoft and Red Hat dropped their support for the chip. Server Watch says that it’s all over too. So does PC World.

 

Linux runs on everything, so if Linux doesn’t run on your chip… Just saying here that you probably do not have to think about where in your DC to put that brand new Itanium-based computer.

What does all this mean for What’s Next?

There are a few obvious outcomes to this line of thinking. One is that the operating systems of the next decade are fewer. Next is that operating systems themselves are going to hide. Really: as much as I love Linux, no one in the marketing department cares what OS their application is running on or under. It’s a hard thing for a computer person to see sometimes, but the change that mobile, DC consolidation and outsourcing (sometimes called “Cloud Computing”) bring is that the application itself is king. It’s their world, and our data centers are just the big central place that they run in.

Clearly Linux and MS Windows are on upward trajectories. Every major player (IBM, HP, Oracle, and so on) supports those two.

The SPARC / Solaris and Power / AIX applications are still alive and kicking. Having spun off its x86 server business to the same folks that bought its laptop business, IBM is left with only high-end servers (the i Series is technically called midrange). (Oh, and Lenovo made that laptop business work out pretty well for itself.) IBM wants to be in the DC, where the margin is. Same thing, more or less, at Sun/Oracle: all their server hardware is being focused on making their core product run faster.

HP will be in the AMD64 or ARM world, and that’s pretty interesting. The Moonshot product is nothing I have personally been able to play with, but it makes all kinds of sense. If you don’t need massive CPU horsepower, you can do some pretty nice appliance-like things here. And since applications are king, not the hardware they run on, having lots of little application units in a grid that are easy to just swap when they fail has a very Internet-like flavor to it.

[Image: water cooled]

 

How will Santa Package all our new Toys?

Looking at Moonshot, and all the various CPUs, it seems that, for a while at least, we’ll be seeing CPUs inserted into sockets or ball grid arrays (surface mounted). Apple has certainly proven with the Air line that soldering to the mainboard solves lots of packaging problems. Till the chips get thicker, and start having water cooling pipes running through them, because air just can’t pull heat away the way that water can.

Yep: liquid in the data center (spill cleanup on aisle three). We can be as clever about the packaging as we like, but physics rules here, and to keep trying to make these faster / better / cheaper is going to mean a return to hotter, more than likely. That’s a real problem in a blade chassis. Even if the water is closed-loop and self-contained to the airflow of the RAM / CPU air path, it means taller. Wider.

Or, you go the other way, and just do slower but more. Like hundreds of Mac Minis stacked wide and deep, or little slivers of motherboards from Airs ranked thirty across and four deep on every tray / shelf. You wouldn’t replace the CPU anymore. The entire board assembly with CPU and RAM would become the service unit. Maybe everything fits into a drawer the same way that disk vendors do it now.

When I designed our most recent data center, it was extremely hard to stay inside the 24 inch / 600 mm rack width. By going taller (48U) I could put more servers in one rack, which meant more power and wiring to keep neatly dressed off to the side, in a rack that had little side room. The network racks are all 750 mm for that exact reason.

If we go uber-dense on the packaging because of the CPU design limits, then what does that mean for the cabling? Converge the infrastructure all you like, the data paths to that density are going to grow, and 40 Gb and 100 Gb Ethernet don’t actually travel in the ether. I know!

That conversation is for another post though.

Data Center Wars – Austin Rising

Austin, Texas, once thought of primarily as a government and university town, is quickly becoming an integral part of the Texas economic landscape. While still overshadowed by the sheer size of the Houston and Dallas/Fort Worth economies, Austin has emerged as a major technology hub that has attracted many leading technology firms to house significant operations in the Greater Austin Area.

In recent years major investments have been made by several data center colocation firms, which has drastically increased the overall supply and options within the Austin market. The heaviest concentration of new facilities has been in and around the MetCenter, just a few miles southeast of downtown Austin.

What is the MetCenter?

The MetCenter is a 550-acre development with many ideal amenities for large, critical data center operations. It is conveniently located near the intersection of two major highways (State Highway 71 & U.S. Highway 183) and has ample availability of redundant utilities and telecommunication services. Due to the attractiveness of the development and the overall growth of demand for data center services in the Austin area, several data center facility firms have launched brand new data center campuses, all located in or within walking distance of the MetCenter.

The Home Teams

Headquartered in Austin, Data Foundry has deep roots in the telecom industry, initially starting out in 1994 as Texas.net, one of the first 50 internet service providers in the United States. Data Foundry has had major data center facility operations in Austin since 2003, when it opened its first site in the MetCenter area. It launched what is now its flagship data center, “Texas 1”, in 2011, just a short drive from the original site. This attractive 250,000 square foot facility sits on a 40-acre “ranch” and is designed and decorated with some tasteful, local Texas flair. The Texas 1 facility is purpose-built to accommodate the latest high-density, critical infrastructure needs. Data Foundry has proven successful in filling this facility and has expansion plans in the works.

OnRamp, also founded in 1994 as an early internet service provider, kicked off its first purpose-built data center operation in Austin in 1998, less than 2 miles from the MetCenter. In 2014 it joined the new Austin data center capacity race by launching a new, modern facility a short walk from Data Foundry’s Texas 1. While OnRamp specializes in colocation services, it also offers very hands-on support for customers that require compliant, managed solutions as well. With additional out-of-state managed offerings in Raleigh, North Carolina, OnRamp is able to provide full managed and hybrid solutions for those looking for more than just high-grade colocation services.

The Visitors

CyrusOne has its original roots in Austin, where founder David Ferdman assembled the original team, but it did not build its first facility there until 2009, nearly a decade after launching its first site in Houston. Now a public company (CONE) headquartered in the Dallas/Fort Worth area after relocating from Houston, CyrusOne has established itself as a major brand name throughout the state. Competition heated up when the decision was made to place its first Austin site right next door to Data Foundry’s original 2003 site. Just two short years later, on the back of local demand and demand from its customers in other markets needing secondary sites, it built “Austin II” just around the corner within the MetCenter, the very same year Data Foundry launched Texas 1.

San Francisco-based Digital Realty is the big kid on the block, a major public company (DLR) with a market cap of over $9 billion. Digital made quite a splash in the Austin market in May of 2013 with its $31.9 million acquisition of a six-building portfolio of properties in Austin’s Met Center Business Park. Two of the six buildings, totaling approximately 100,000 square feet, are operating data centers under lease to Data Foundry and CyrusOne. The six buildings are located adjacent to Digital Realty’s data center in the MetCenter. Nearly half of Digital’s 75,000 square foot facility is occupied by a single client, with the other half consisting of two of Digital’s standard 10,000 square foot “Turn-Key” data center suites.

With Digital’s nascent entry into the world of colocation, staffing and support of the facility are minimal, as one would expect from a wholesale real estate/data center company. Clients will, however, still benefit from Digital’s world-class operational procedures and Tier 3 design features. For enterprise organizations with large data center footprints and a global presence, Digital could be an ideal fit.

The Game

The growing competition in the Austin area has provided numerous benefits to end users. An ever-increasing number of high-quality facility options, along with more specialized managed service offerings, has put a broad array of choices on the table to fit the needs of consumers. Start-ups and smaller independent organizations benefit from the availability of managed hosting and public/private cloud access, while larger enterprise-type organizations can leverage their own IT personnel in a more traditional colocation environment. Additionally, the entry of new players has resulted in more aggressive pricing across the board.

About the Authors:

Todd Smith and Kevin Knight specialize in the data center facility market, working for the technology advisory firm Kiamesha Global (www.kiameshaglobal.com). If your organization is considering the potential benefits of a data center relocation or expansion, or simply wants to better understand its options in the data center marketplace, Todd and Kevin can be contacted via e-mail at tsmith@kiameshaglobal.com and kknight@kiameshaglobal.com. Even if you have no changes under consideration for the New Year, they would welcome the opportunity to provide you with an assessment of the current market value of your existing data center portfolio’s in-house and/or colocated facility assets so that you can better recognize where you stand in this rapidly evolving market.

The Future of IT Infrastructure – Where Do We Go From Here?

This article was written by Steve Carl, Sr. Manager, Global Data Centers, Server Provisioning at BMC Software.

 

In a series of articles I did in my Green IT blog at BMC, I documented the design we had chosen to consolidate not just our data centers, but all the server platforms therein. Collectively I called it “Go Big to Get Small”.

The thing I have been thinking about since then is “What’s next?” Two years ago when we set out to do this, we had to research, pick a design and commit to it, and to some degree hope that the technology stack we went with for each platform would be viable all during the course of the first phase of the project. Not that we could not adjust along the way, but standards simplify complex, long-term projects such as taking two data centers at over 40,000 square feet and making them one DC at 2,500 square feet. The fewer variables and moving parts, the easier it is.

At Dell World last week I heard someone talking about “Moore’s Law”. They were blithely putting it out there as if it was the same thing as a law of nature. It’s not, of course, and it has not been served well by having the suffix of ‘Law’ attached to it. It is not the second law of thermodynamics. It’s an observation of a trend.

A trend that is abating.

[subtitle3] Moore [/subtitle3]

Here is an example: two years ago we set the standard X86 computing node for virtualization as the Dell M620 blade, with 2 sockets, 16 cores, and 256 GB of RAM. That would, according to our performance and capacity planning tool, be on average a node that would run about 50 VMs of our average VM type. The CPU would average about 50% or so, and the RAM about 80% or so. Given how big the clusters of nodes were, that was extremely well performing, and also highly fault tolerant.

Two years later I ran the numbers, and it turned out that exactly the same configuration was STILL the sweet spot in price / performance. I was SURE that the answer would say that now would be a good time to move to higher core counts and denser memory. I was only going for a 50% increase in RAM, and a few more cores: 384 GB and 20 cores to be specific. It was doable, but the price was MORE than 50% higher, so it made no sense. I counted power, space, heat load, and the whole enchilada. It may be different now with the new M630 being out: I have not re-run the numbers yet for it.
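For readers who want to see the shape of that calculation, here is a minimal sketch of the price/performance math. Only the core/RAM configurations come from the text; the node prices and the simplifying assumption that VM capacity scales with RAM are invented for illustration.

```python
# Hypothetical price/performance comparison in the spirit of the text.
# The configs match the post (16 cores/256 GB vs. 20 cores/384 GB);
# the prices and the RAM-bound capacity model are assumptions.

def cost_per_vm(node_price, ram_gb, vms_per_256gb=50):
    """Assume VM capacity scales with RAM: ~50 average VMs per 256 GB node."""
    vms = vms_per_256gb * (ram_gb / 256)
    return node_price / vms

current = cost_per_vm(node_price=20_000, ram_gb=256)   # baseline node (price assumed)
bigger  = cost_per_vm(node_price=32_000, ram_gb=384)   # >50% pricier for +50% RAM

print(f"Baseline node: ${current:,.0f} per VM")
print(f"Denser node:   ${bigger:,.0f} per VM")
# If the denser node costs more than 50% extra for 50% more capacity, the cost
# per VM goes up, which is why the original configuration stayed the sweet spot.
```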

For the purposes of this post however, the main point is that Moore’s Observation has a finite end date to it. It will decelerate. Every turn of the technical crank will end up getting more expensive at some point.

We’ll still be able to shrink our data centers and use less power, etc., for a while yet. There are too many DCs running around with ancient gear in them for that not to be true.

[subtitle3] Store [/subtitle3]

It’s not just about the CPU or server side of it either. Physical, spinning disks (what we used to call the ’round and brown’, even though they are all silver these days as far as I know) are reaching what they can do with areal density. It was never on the same trajectory as Moore’s Observation, with this article noting a doubling in 5 years, not 18 months.

Most folks think spinning media is doomed soon anyway. Flash memory will sooner or later kill it for both power and density reasons.

Either way, same ultimate issue as a CPU then: quantum mechanics is going to limit how small something can get. Once that happens, there is also a limit to power reductions and DC size reductions, not counting playing games with only powering up what’s in use, possible packaging innovations, etc.

 

[subtitle3] Virtualization [/subtitle3]

Virtualization has been a free ride for nearly 1.5 decades in the AMD64/x86 space, and since the early 1970s on the mainframe. It may have fairly low overhead these days, but its ability to help consolidate starts to dry up once you move the average CPU / memory utilization up near the maximum possible. We had data centers FULL of barely used computers, with the average around something like 3% in our shop (much lower than the 10-15% often quoted in the trades; being an R&D shop, we had reasons why our servers were both sprawled and lightly utilized).

Once you have driven those numbers into the 80s and 90s, you are starting to be out of road there, though. The next great hope is technology like LXC, Docker, et al. Re-virtualizing the same OS over and over will eventually get old, and it will give way to application virtualization. And then you’ll be at the wall again.
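Here is a rough illustration of why that runway shrinks, using the utilization figures mentioned in this section (3% in-house, 10-15% trade press, 80-90% target). The model is deliberately naive and ignores RAM, peak loads and failover headroom.

```python
# Rough illustration: consolidation headroom shrinks as average utilization rises.
# The utilization figures are the ones mentioned in the text; the model is
# deliberately naive, ignoring RAM, peaks, and failover headroom.

def max_consolidation_ratio(current_util, target_util):
    """How many lightly used servers could, in theory, fold onto one busy one."""
    return target_util / current_util

for current in (0.03, 0.10, 0.15):
    for target in (0.80, 0.90):
        ratio = max_consolidation_ratio(current, target)
        print(f"{current:.0%} -> {target:.0%}: ~{ratio:.0f}:1")

# Starting from 3%, the theoretical ratio is roughly 27-30:1; starting from
# 80-90%, there is essentially nothing left to consolidate -- "out of road".
```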

 

[subtitle3] Capacity Planning [/subtitle3]

The role of capacity planning just gets more and more central as the bottleneck game is played. Remove one barrier to full utilization and the next one crops up, until you are at full utilization all the time. This is not a new idea to anyone with a mainframe background. I once knew a mainframe capacity planner who would predict things like “On March 29th of next year, we’ll hit 100% average workload all during prime shift”, and be eerily accurate.
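As a toy version of that trick, the sketch below fits a straight line to some invented monthly utilization samples and extrapolates the date the trend crosses 100%. The sample data, the linear model and the baseline date are all assumptions for illustration; real capacity planners use far richer models.

```python
# Toy capacity-planning forecast: fit a line to utilization history and
# extrapolate when it hits 100%. Data and baseline date are invented.
from datetime import date, timedelta

samples = [  # (month index, average prime-shift utilization)
    (0, 0.62), (1, 0.65), (2, 0.67), (3, 0.71), (4, 0.73), (5, 0.76),
]

n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
         / sum((x - mean_x) ** 2 for x, _ in samples))
intercept = mean_y - slope * mean_x

months_to_full = (1.0 - intercept) / slope      # when the trend line hits 100%
baseline = date(2015, 1, 1)                     # assumed start of the sample window
predicted = baseline + timedelta(days=30.44 * months_to_full)
print(f"Trend line crosses 100% around {predicted.isoformat()}")
```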

 

[subtitle3] When? [/subtitle3]

That all of this will come to pass I have little doubt. The issue is when. I think about such things now because, as I design new data centers and read about DC technology, it is getting more and more esoteric. When will hot / cold aisle be out of the game, and liquid-cooled racks be required? Long enough from now that I can avoid that under-floor infrastructure cost now? Or should I just assume I can move to something more modern every time I hit the next bottleneck? What does that mean about how I might capitalize a DC build-out? Is it still a 10-year investment? We used to do these for 30 years. No way we’d do that now.

 

GUEST BLOGGER ALERT

This article was written by Steve Carl, Sr. Manager, Global Data Centers, Server Provisioning at BMC Software.

See Carl’s other blog articles

“Adventures in Linux” : http://communities.bmc.com/communities/blogs/linux/
“Green IT” : http://communities.bmc.com/communities/blogs/green-it/

 

Data Center Wars – Battleground Houston

Houston, Texas, is being described by many as an absolute boom town, with substantial growth across many verticals.  Well known for being a global energy-sector capital, it has become a resilient and well-diversified economy, with twenty-six Fortune 500 companies making their home in the Houston area.

Many data center firms have taken advantage of this growth by building significant capacity in the market. Well-established public data center companies such as CyrusOne (CONE), Digital Realty Trust (DLR) and Internap (INAP) have all made recent investments in additional infrastructure in Houston to keep up with the high demand.  CyrusOne expansions are currently underway in northwest Houston, Digital’s in the north-side Greenspoint area and Internap’s in downtown Houston.  Traditional real estate firms have also eyed the market in recent years, with Stream Realty building a major wholesale campus in The Woodlands area of Greater Houston in 2012, much of it already under lease.

While Houston has previously been a market with few data center players relative to other major metropolitan areas, new firms have entered and are making major investments, often with new, innovative approaches.  With intense competition rapidly on the rise, a local data center war has heated up, giving customers very attractive options for local colocation service.


[subtitle3] Westside Providers Building to Serve Big Footprints [/subtitle3]

Katy, TX-based StratITsphere recently made the decision to exit the retail colocation business, entering into strategic partnerships with customers like Alpheus and Scale Matrix in order to focus on its own wholesale service model.

StratITsphere CEO Darin Cook shares his thoughts on what he sees in the Houston market.

“In general, I believe there to be a shortage of supply within the Houston data center market place even with all of the recent builds.  Most of the competitors in Houston are disciplined and, to this point, have stayed focused on core retail or wholesale products.  Houston is beginning to mature with new providers moving into the market with both retail and wholesale products.  As new competitors come to Houston, the boundaries of wholesale versus retail will remain as a majority of the opportunities in Houston are mid-market enterprise companies that traditional wholesale focused providers shy away from.”

Dallas-based Skybox Datacenters has a sizeable 20-acre campus and 96,000 SF facility under construction on the west side of Greater Houston, providing dedicated 1.2 MW private data halls for wholesale customers.  This is their first facility in the local market, and they are looking to make a big splash with a focus on large footprints typically operated by tech-savvy major enterprises and large technology service providers.

Rob Morris, Managing Partner of Skybox, provides us insight into the rationale for this major capital investment.

“We have made this significant investment in Houston because we observed a lack of true wholesale data center space over the past several years.  While very common in most primary markets, it has yet to really make its debut in Houston.  Larger customers typically opt for wholesale data center leases for the cost, control and flexibility they provide their operations.  Wholesale should not just be about getting a better per unit price because you are purchasing more.  The real wholesale value is in receiving complete transparency in operations, complete pass through power and operating expenses as well as long-term operational flexibilities that simply can’t be effectively provided in traditional colocation.

We are not everything to everyone and we won’t try to be.  We want to make sure we are providing the absolute best solution to each of our customers.  For large, mission critical, enterprise grade requirements we are seeing a very positive response to this new offering.”

 

[subtitle3] High Touch Northside Centers [/subtitle3]

The Westland Bunker, with its data center campus located in the far north of the Houston area in Conroe, Texas, is well known for its highly resilient underground facility, which was converted from an expansive nuclear shelter that sits 345 feet above sea level.  This facility has provided production, disaster recovery and business continuity services to organizations seeking high security as well as safety from natural disasters.

Due to high Houston-area demand, an additional above-ground expansion is underway at the site. The initial stages of the build plan will add 50,000 square feet of high-density engineered capacity, capable of servicing customers looking for private suites, cages or individual locked cabinets.

Westland Bunker Vice President of Sales Brock Nieves is optimistic about the overall landscape of Houston colocation.

“We have seen substantial demand for growth within our existing customer base while new demand has clearly been on the rise.  The appetite for outsourced data center services has brought new opportunity for us as well and other providers in the area.

We are experienced in handling large requirements and customers that could be considered wholesale but we differentiate ourselves by providing a high level of personal 24/7 service to all of our customers in helping them achieve their specific business goals.  Regardless of whether they have a full private suite or a single cabinet it is our goal to do everything possible to assist them in growing with us on our campus.  Our new build is a commitment to making sure we have an easy path forward for them to do so.”

Scheduled for completion in the first quarter of 2015, Austin-based Data Foundry’s “Houston 2” will be its largest greenfield data center situated on 18 acres in North Houston.  The carrier neutral facility is being designed with fully redundant power and cooling infrastructure capable of servicing high density, mission-critical footprints.

Data Foundry CTO Edward Henigin has been with Data Foundry since the very beginning, as employee #1, when the company was known as Texas.Net, one of the first 50 ISPs in the United States.  He has definitely seen a lot of market change in Houston over the past 20 years.

“We are making a significant investment in Houston to take advantage of the rapidly increasing local demand for data center services within a variety of verticals.

For Data Foundry it is all about the finished product not just the backend data center infrastructure.  Our hands-on service takes away all of the headaches of facility management not just the financial ones.  We put significant resources in place in the form of facility and customer service staff to provide tailored services to customer requirements that have commonality across the customer base.  We see that many Houston organizations are eager to take advantage of this style data center service offering.”

Fibertown is another data center firm that believes in the hands-on, high-touch service approach.   They are well known for providing Houston companies with disaster recovery data center and business continuity services out of their Bryan, TX data center.

Fibertown Vice President of Sales Craig McClusky talks about their decision to build closer to the center of Greater Houston, in the Greenspoint area of north Houston.

“Houston companies are seeing a positive investment in outsourcing data center services to experts who can become an extension of their IT team. This has spurred a rise in demand for high performance, secure data centers close to the office. The robust Houston economy and thriving oil & gas and health care industries will continue to create demand for colocation among the mid-market which is our core focus.

Our customers tell us that they appreciate our passion, accountability and approach to personalized customer service. Our customers with large data requirements and production systems needed a facility closer to home, and we delivered by opening a Houston data center.

At Fibertown, we are delighted to see a vibrant and expanding market for colocation.  Lots of companies are making the decision to move to much more secure locations for their computing needs.  Security and uptime are industry givens but the differentiator can easily be the NOC and the human touch points.”

 

[subtitle3] The Future of the Houston Market [/subtitle3]

With new data center campuses popping up all over the Houston area, it is indeed an exciting time for the local colocation market.  It is estimated that well over 50% of Houston-based companies still operate their core production data centers in-house.  Companies evaluating a move into colocation facilities now have many options to choose from, at rates that have typically fallen from recent levels due to lower building expenses per kW and increased competition.  Those that have already moved but are unhappy with their current provider now have multiple avenues to relocate.

In the Data Center Battleground of Houston there appear to be many winners and few losers.  Houston companies looking for data centers will surely continue to win as this healthy competition continues.


Data Center Wars – Retail vs. Wholesale

An all-out data center war has ensued in many maturing marketplaces throughout the world between variations of data center providers.  While many data center firms continue to grow and thrive during these battles, the consumer is primed to gain the most as we head toward commoditization of data center infrastructure.

[subtitle3] How did this war get started?  [/subtitle3]

The lines between wholesale and retail were once pretty clear, with 1.2 megawatts of dedicated power infrastructure in a private hall or building being the starting point for a typical wholesale offering.  It was not long ago that retail colocation providers found prosperous working relationships by leasing wholesale space and carving it up amongst smaller users.  Some of these same retail providers began to move upstream as they found larger opportunities to use as anchor tenants for new campuses, and began to compete directly against the wholesale providers.  At essentially the same time, many of the wholesalers, beginning to lose opportunities to smaller, more service-intensive providers, became enticed by the rich margins being garnered in the retail space and began to move well below the 1.2 MW wholesale mark.  Early methods of moving downstream came when 1.2 MW “pods” or “suites” were subdivided into what are commonly known as “PDU breaks”.  Each PDU break was approximately one quarter of a pod, or 300 kW of available power delivered in approximately 2,500 sq. ft. of caged data center space.
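A quick arithmetic check on those PDU-break figures; the pod size, quarter split and caged footprint are the numbers from the paragraph above, and the resulting power density is simply derived from them.

```python
# Quick arithmetic on the PDU-break figures quoted above.
pod_kw = 1200                 # a 1.2 MW wholesale pod
pdu_break_kw = pod_kw / 4     # one quarter of a pod
caged_sqft = 2500             # approximate caged footprint per PDU break

print(f"PDU break: ~{pdu_break_kw:.0f} kW")                               # ~300 kW
print(f"Power density: ~{pdu_break_kw * 1000 / caged_sqft:.0f} W/sq ft")  # ~120 W/sq ft
```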

 

[subtitle3] Who has the advantage in this war? [/subtitle3]

Data center providers with a retail colocation service background have more experience in managing large numbers of customers on shared infrastructure.  The retail model is geared to be an “all-in” service, with the provider handling everything from power installations to running cross connections.  To succeed, these companies were required to have strong sales and marketing cultures that developed brand recognition in the geographic markets they operate in.  They have advantages when a customer needs assurance of service support and the confidence of a recognized name.

Data center providers with a wholesale service background tend to have cultures based primarily on high finance and real estate.  They are used to dealing with large customers that are difficult to conduct transactions with from a commercial and legal standpoint.  By the nature of wholesale, they did not need to have everyone know their name, just the set of partners, typically real estate brokers, and customers that swam in the very large infrastructure space.  In addition, they have little experience with, and understanding of, managing and servicing many customers within data center environments.

With the lines between retail and wholesale blurred, those with retail backgrounds that have been able to structure themselves for access to large amounts of low-cost capital have the advantage.  They are now able to play both games effectively, while the wholesalers must play catch-up in learning how to provide effective, scalable multi-tenant service.

On the other hand, the retail providers that have not been able to position themselves to deploy large amounts of inexpensive capital are in a tough spot.  Their survival will depend on how quickly they can catch up in the capital game, or on their ability to expand from colocation to a broader range of high-value services, which would give them an advantage with customers looking for more holistic service packages.

Both retail and wholesale providers also face the prospect of being an acquisition target if they are unable to adapt to the changing landscape.

 

[subtitle3] What are consumers gaining from this war?[/subtitle3]

In short, customers are getting a better technical product at lower prices.  Lots of capital, both public and private, has been pouring into the major and mid-range data center markets all over the world, often via highly efficient, modular-style builds that allow providers to build at a much lower cost per kilowatt than ever before.

While this may result in lower prices, customers shouldn’t allow it to lead to reduced or scaled-back service.  In order to ensure top-tier support, you will need to be mindful that your Service Level Agreements reflect your expectations of service.  Additionally, it is very important to have an understanding of data center staffing and operational procedures, as these may affect the overall level of service.

 

[subtitle3] What does the future hold? [/subtitle3]

It is always difficult to accurately predict the future, especially in the technology sector, where disruptive inventions lurk around the corners.  How variations of cloud service impact the data center industry at the wholesale and retail levels is still playing out.  The ability to move data and workloads more seamlessly at greatly reduced costs is already upon us.  The only thing to be sure of is that there is plenty of money to be made (and lost), as well as great efficiencies and productivity to be gained.

Can’t buy me love – Are you sure a vendor is motivated to provide you good service?

It is an easy trap to fall into: believing that your business and the money it brings is enough to get a vendor to provide the world-class service you may need for a critical business requirement.  There may even be a highly motivated salesperson doing everything possible to convince you that your business is in the right hands.  How can you be sure, though, that your business is indeed a fit for a vendor, and that the working relationship will be a positive one for your organization?


[subtitle3]Can Quality of Service be Quantified?[/subtitle3]

Technical capability and cost are items that are typically straightforward, easy to prove and easy to use for “apples-to-apples” comparisons between vendors.  Quality of service, on the other hand, is much harder to pin down.  Internal figures provided by the vendor can be easily skewed or flat-out rigged, and even references can be solidly tainted.  External market figures from trusted sources are available in some cases, but are they comparable to your requirements?  It can truly be a daunting task to get to the bottom of this important question, one that frankly can’t be fully quantified.  The effort to uncover enough truth to make a valid decision is, however, warranted, especially when the consequences of a vendor selection mistake are high and not fast or easy to correct.

 

[subtitle3]Critical Points to Consider[/subtitle3]

Account Size – What is the size of their typical account in terms of spend?  Is your organization or requirement one that fits that mold?  What end of the scale do you fall on?  If on the small side, you may be insignificant, but if too big, they may not be experienced enough.

Service Type – Is the service you are looking to buy a core part of their business?  What percentage of their total revenue is comprised by this service type?  Do they have a long term commitment to provide this type of service to the market?

Organizational Culture – Is this service part of the foundation the company is built on?  Do they have a vision for the provision of this service?

Service Level Agreements – Is the vendor willing to work with you to put tangible consequences to back their service claims?

Up-Selling Culture – Does the vendor offer a multitude of services in which you may be pressured to buy more of after the initial sale?

Contract Renewals – Are you protected from severe price increases?  Is there a chance they could simply non-renew your contract?

References – Don’t just take the ones they give you out of a can.  Ask questions about customers that are very similar to you and your requirement.  Spending time talking with those customers will be valuable.

Their Overall Plan – Is it likely this company will be purchased and absorbed by another company in the foreseeable future?  Does the company have solid financial standing to assure staying power?  How much are they willing to disclose about their overall plans and financial situation?

 

[subtitle3]Case Study[/subtitle3]

Several years ago a major telecommunications company had a high-caliber colocation facility in the Texas market.  It had a stellar uptime record and service reputation with its customers at this site.  Some time after the site launched, and with many customers installed, a major energy company reserved all the remaining space and power at the location.  The telecommunications company then turned away additional orders from current customers, giving them no guidance on future availability of capacity or any tangible alternative solutions.  Customers who had never had a single notable issue were now in panic mode.  They were on fixed contracts yet now needed to put additional IT load in other locations, causing operational headaches, additional costs and potential loss of revenue.

What happened here?  Why were these previously happy customers put into this situation?

Colocation was a tiny part of this telecommunications company’s total business.  Spending capital to add capacity to some colocation site in Texas was not a high priority, regardless of the heartburn it might cause a very small portion of its total customer base.

 

[subtitle3]Relationships Matter[/subtitle3]

In the end, relationships are absolutely paramount in business, and the importance of quality vendor relations is something we have highlighted before (The Perilous Waters of Contentious Vendor Relationships).  Glossing over service quality while over-analyzing technical capability and sticker price is truly a dangerous game for any service deemed business critical.  You quite often simply can’t buy the service you are looking for unless the vendor is truly motivated, at the core of the organization, to provide it.  Making sure you develop the right relationships with organizations that are driven to do so will serve you well.

Big Money Decisions – Avoiding Colocation Contract Problems

She was probably the sharpest, most experienced Purchasing Manager we had ever negotiated with.  After taking the initiative to write the company’s Master Service Agreement, incorporating all of their business requirements and operating procedures, she was prepared to force any vendor to use it if they wanted to do business.  Fortunately for us, once we conducted a full review of her MSA, she agreed that it was missing a key piece of information that would have wiped out any SLAs we were offering.

Even with the most competent decision makers, there are pitfalls and traps (some accidental, some purposeful) in contract negotiations that even they may not have encountered previously.  With the continued trend toward using an outsourced data center, it has become even more important that end users become more knowledgeable on this subject.  A recent study by The Uptime Institute showed that while 63% of third-party data center operators reported a “large budget increase” for 2013, only 25% of enterprise data center operators reported a similar “large budget increase.”  This disparity should only increase over time, as the shift away from on-premises facilities shows no sign of slowing.

The following are several key decision-making criteria that need to be part of a data center contract review process prior to acquiring new data center service.  While not a comprehensive list, this should help to highlight some areas of chief concern.

[subtitle3] Service Level Agreements [/subtitle3]

All SLAs are not created equal.  While they may provide for 100% uptime, the most important factor is the amount of credit that you will receive in the event of an outage.  Credits typically fall into one of three categories: temperature, humidity and power, each with its own criteria. The amount of each credit issued can range from one day of credit per outage to a pro-rata share of the daily expense.  Outage credits are sometimes capped at anywhere from 5 days to 1 month, so review this carefully.  The enterprise should also be familiar with its options for termination after repeated outages, and whether those outages need to be of a certain duration to qualify.
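To see how much the credit formula matters, here is a hedged, purely hypothetical sketch: the monthly charge, the outage list and the 5-day cap are invented, and real SLA language varies widely by provider.

```python
# Hypothetical illustration of how two common credit formulas can differ.
# The monthly charge, outage list, and 5-day cap are made-up numbers;
# real SLA language varies by provider and should be read carefully.

monthly_charge = 9000.0              # assumed monthly recurring charge ($)
daily_rate = monthly_charge / 30     # simple daily expense

outages_hours = [0.5, 2.0, 6.0]      # assumed outages in a single month

# Formula A: one full day of credit per qualifying outage, capped at 5 days.
credit_per_outage = min(len(outages_hours), 5) * daily_rate

# Formula B: pro-rata share of the daily expense for the actual downtime.
credit_pro_rata = sum(hours / 24 * daily_rate for hours in outages_hours)

print(f"One day per outage (capped):     ${credit_per_outage:,.2f}")
print(f"Pro-rata share of daily expense: ${credit_pro_rata:,.2f}")
```

Under these assumed numbers the two formulas differ by almost an order of magnitude, which is why the credit mechanics deserve as much attention as the uptime percentage itself.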

[subtitle3] Pricing Models [/subtitle3]

It can be difficult to accurately compare pricing between different providers because the pricing models used often vary from provider to provider.  Pricing models used by data center providers typically fall into one of three categories: Full Service Gross (FSG), Modified Gross (MG) and Triple Net (NNN).  Here is a brief comparison of the advantages and disadvantages of each model, followed by a rough numeric sketch.

 

[subtitle] Full Service Gross (FSG) [/subtitle] The provider takes care of all operating expenses such as building insurance, real estate taxes, security, etc.

Benefits – An easy-to-pay number; the customer fixes their costs.
Disadvantages – Base rent looks high because all costs are included.  The provider takes on inflation risk unless annual increases are included.

[subtitle] Triple Net (NNN) [/subtitle] The provider passes through all operating expenses such as building insurance, real estate taxes, security, etc. to the customer based on their pro-rata share.

Benefits – Provider reduces inflation risk; base rent looks lower.  Generally applies to larger deals.
Disadvantages – Customers don’t get fixed costs and are subject to the unexpected cost of repairs and property upkeep.

[subtitle] Modified Gross (MG) [/subtitle] The provider is responsible for the major expense items (taxes, insurance, etc.), but the tenant is responsible for their directly related expenses (power).

Benefits – Providers and customers share the inflation risk; base rent looks lower than FSG and higher than NNN.
Disadvantages – MG is harder to administer.  The landlord takes on the risk for variable costs.  Works well for turn-key datacenter space.
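Here is the rough numeric sketch promised above. Every number is invented (and deliberately tuned so the totals come out comparable); the point is only to show where the expenses land under each model, not what real quotes look like.

```python
# Purely illustrative comparison of the three pricing models. All numbers are
# invented and tuned so the totals converge; only the cost allocation differs.
AREA_SQFT = 2500          # hypothetical footprint
OPEX_PER_SQFT = 12.0      # taxes, insurance, security, upkeep ($/sq ft/yr)
POWER_COST = 55_000       # tenant's metered power for the year ($)

def annual_cost(base_rent_per_sqft, pays_opex, pays_power):
    """Total annual cost to the customer under a given model."""
    total = base_rent_per_sqft * AREA_SQFT
    if pays_opex:
        total += OPEX_PER_SQFT * AREA_SQFT   # pro-rata operating expenses
    if pays_power:
        total += POWER_COST                  # directly metered power
    return total

# FSG bundles everything into rent; MG passes power through; NNN passes both
# power and operating expenses through, so its base rent looks lowest.
print("FSG:", annual_cost(base_rent_per_sqft=68, pays_opex=False, pays_power=False))
print("MG: ", annual_cost(base_rent_per_sqft=46, pays_opex=False, pays_power=True))
print("NNN:", annual_cost(base_rent_per_sqft=34, pays_opex=True,  pays_power=True))
```

The takeaway is the one stated above: a lower base rent does not by itself mean a cheaper deal, because the pass-through items have to be added back in before any comparison is meaningful.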

 

[subtitle3] Renewal Option and Holdover [/subtitle3]

Clients should always ask for and negotiate renewal options with any new data center contract or lease.  While you may never have to exercise your option, it is important to have it available.  You can negotiate for one or more options.  It is always best to obtain as many options as possible, as you do not have the ability to go back and negotiate the same rates without an option in place.  A typical renewal might have one or two options, each for 3 additional years, with a guaranteed rate of 103% of the base rate for the last year of the previous term.

One final note: you need to be mindful of any required extension option exercise notices that may be specified.  You will need to submit the notice as a written request, usually 9-12 months prior to the end of the initial term.  Failure to do so will void any remaining options you were offered.

 

How to Evaluate Data Centers by Asking the Right Questions

If you’re evaluating colocation and researching data center options, you need to be fully informed. Fibertown has created a comprehensive checklist that will make sure you ask all the right questions. From power and networking to facility design and support services, you’ll learn how FIBERTOWN stacks up against the rest.

Some key things to consider when evaluating your data center provider:

  • Company Profile & Financial Stability
  • Power and Cooling Infrastructure
  • Security Protocols
  • Networking Options
  • Facility Design

 

[button url=”https://www.kiameshaglobal.com/evaluate-data-centers-asking-right-questions-fibertown/” target=”_blank” color=”blue” size=”large” border=”true” icon=””]Download Checklist[/button]

Data Center Relocations – Getting out Alive!

[Image: indiana-jones]

A data center relocation is often a complex and daunting task that many IT executives simply dread.  Anyone responsible for the success of a move under consideration may have nightmares that resemble the memorable Raiders of the Lost Ark scene where, despite all his study and planning, Indiana Jones retrieves the golden idol but moments later realizes he made a mistake and the site comes tumbling down.

There surely are many traps to identify and avoid in the physical move logistics, network re-homing, bandwidth sizing, power and cooling requirements, maintenance windows, hardware refresh/reconfiguration and the almost certain “unexpected” category.  On top of all this, if one is moving out of a colocation site, there is quite often a complex gauntlet of commercial and legal details to be measured and dealt with.

 

[subtitle3] Knowing what you have  [/subtitle3]

A thorough assessment of the physical, logical and network environments will pay dividends in assuring you are being efficient and effective in the overall migration planning and the actual “go-time” move.  Even a well-managed environment may still have a few of those “we aren’t sure exactly what is running on that” pieces.  Building up a clear picture from the ground up, from the top down and from side to side, with multiple eyes giving checks and balances, may feel to some like laborious overkill.  However, to those who have been through the pain of an under-planned migration, making sure the environment and all its pieces are absolutely documented and measured is unquestionably a must coming out of the starting gate.

 

[subtitle3] Internal Preparation & Communication [/subtitle3]

Once you know exactly what the environment is in its current state, preparing the organization, not just the IT team, for the move and the changes it will bring is critical.  Clear, organized communication is the key ingredient to getting this right.

IT Management Veteran Phillip Butler [icon name=”linkedin” size=”18px” link=”https://www.linkedin.com/profile/view?id=401442&authType=NAME_SEARCH&authToken=6_a9&locale=en_US&srchid=235768951410447854357&srchindex=1&srchtotal=299&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A235768951410447854357%2CVSRPtargetId%3A401442%2CVSRPcmpt%3Aprimary”] who currently serves as the Infrastructure Manager & Architect at TPC Group (www.tpcgrp.com), a Houston-based chemical producer, has been through a lot of moves in his 25+ year career.  “Identifying all the internal application and business unit stakeholders that need to be communicated and coordinated with can definitely be a moving target that the migration team must be aware of.  Moves often take place over the wee hours of the night, on weekends and even holidays, so making sure your checklist of internal stakeholders and their availability for testing or troubleshooting during these periods is critical.”

 

[subtitle3] Working with the External Team Members & Components [/subtitle3]

A migration team for a move of any complexity requires plenty of external components and resources.

“Communication with both your reseller/integrator and the equipment manufacturer is key to avoiding the pitfalls of equipment damage and failure during the move.  Many manufacturers require a notice prior to moving major pieces of equipment.  This is necessary to retain warranty status and updated records if replacement parts are needed because of damage during the move.   There are even some who require diagnostics be performed both pre- and post-move”, says Butler.

“Insurance is critical and there are really three different documents you may need.  For example – If you are moving from one colocation environment to another both the data center you are leaving and the one you are going to will require certificates of insurance from any company who will access their premises.  This includes the movers, the company assisting in teardown and installation and of course your company.  And don’t forget to verify the moving company has enough liability insurance that will cover the entirety of what they are moving for you.

Detailed site and connectivity documentation will save you time.  In a recent move a company decided to upgrade the core network switch to get rid of an EOL component.  The pre-move switch connections and port assignments were not well documented and caused a long delay in returning the systems to service.”

Peter Morris, a Principal at Clark, Duncan, Morris [icon name=”linkedin” size=”18px” link=”https://www.linkedin.com/profile/view?id=63322503&authType=NAME_SEARCH&authToken=THV7&locale=en_US&trk=tyah2&trkInfo=tarId%3A1410446114368%2Ctas%3Apeter%20morris%2Cidx%3A1-1-1″] (www.clarkduncanmorris.com); a corporate move and data center relocation specialist, has plenty of good advice on this topic.  “When making selections of your partners and providers, it is VITAL you check references, and verify that this type of program or event has been done before and the client was pleased or can offer you industry peer feedback.  Not everyone is good at everything.  So position your team to their known strengths and shore up any shortfalls in experience or talent.

The biggest challenge we see is when a physical DC migration happens, that the players who attended the meetings, sold the project and made arrangements, are not the same when the move is happening.  This causes stress and a need for more external communication via call or e-mail.  This eats up time which the client has to pay for in the end.  The next challenge is “we are here, it does not work!?”  You move to a new environment and things don’t connect.  This is a preventable problem with planning and new / redundant hardware and switchgear and a solid circuit provider.  This is a small investment for peace of mind and a stable network.”

 

[subtitle3] Environment Redesign & Existing Colocation Agreement Considerations [/subtitle3]

Kevin Knight [icon name=”linkedin” size=”18px” link=”https://www.linkedin.com/profile/view?id=10014302&authType=OUT_OF_NETWORK&authToken=SZJY&locale=en_US&trk=tyah2&trkInfo=tarId%3A1405561342186%2Ctas%3Akevin%2Cidx%3A1-5-5″], Vice President of Consulting Services for Kiamesha Global (www.kiameshaglobal.com), has been involved with hundreds of migrations into and out of facilities, from both the customer and service provider perspective.

“The logistics of a move alone can be a full time job.  There will also usually be some level of technology refresh that will also take place concurrently.  A refresh will frequently require a redesign of racks, low voltage wiring, power and cooling.   As a result strict timelines will need to be established as much as 12-24 months in advance.

When moving out of an existing colocation site it is critically important to be aware of the time frames that were part of your original agreement.   Typically, you would have specified some or all of the following:

Holdover – This can be employed if you are unable to surrender the space at the end of the term.  The provider may allow you to stay in your space for a limited time after end of term at a significantly higher rate.

Method of surrender – Generally addresses that the space be left in working order and clean condition, ordinary wear and tear excepted.

Extension Options – Can provide one or more contract extension periods, typically from 12-48 months.  Possibly at a rate agreed to at time of contract execution.

Auto-Renewal – Often a non-decision by either the customer or the provider will trigger an auto-renewal, which can be for 1 year or, in some cases, for an entire additional term.”

 

[subtitle3] The Network [/subtitle3]

From running a global, provider neutral communications agency since 1997, TeleSource Communications (www.telesourceinc.com) President Adam Myers [icon name=”linkedin” size=”18px” link=”https://www.linkedin.com/profile/view?id=2673408&authType=NAME_SEARCH&authToken=49Nj&locale=en_US&trk=tyah2&trkInfo=tarId%3A1410446063626%2Ctas%3Aadam%20%2Cidx%3A1-1-1″] fully understands the network pitfalls in these situations.

“A Datacenter is different from any other commercial property when it comes to network connectivity in that it is typically the one location that has an abundance of service provider options pre-wired with fiber due to its heavy consuming customers or tenants. With that being said, the Datacenter can be the most difficult place to install said services due to a laundry list of physical access control policies, internal wiring/infrastructure design requirements and unique responsibilities to complete each individual piece of the puzzle resulting in a green light for a customer.

 

Before going from an in-house Datacenter to an off-site colocation facility, it is critical for customers to assess their current (Wide Area) Network architecture, application requirements, unique capabilities, and functionality while taking into consideration how to provide the best ‘end user experience’ based on the overall cost, securely. Re-homing a network (to a Datacenter) is still re-homing a network. In summary, a lot of work to make sure you keep what you need, you leave the past behind you and are not written up by your manager for gross insubordination at 3am while running a trouble ticket with your ISP trying to change your DNS or MX records that have been hijacked by Ukrainian separatists!”

 

[subtitle3] Worth Getting Right [/subtitle3]

In this relatively short piece we’ve reviewed some critical items at a high level, but of course there is much more to consider.  It is clear that the best approach is to avoid shortcuts and lack of attention to detail.  Moving data centers can absolutely be managed with a great deal of success, but it does take some real effort.  Indiana ultimately made it out on the other side with his prize after avoiding all the traps; however, he still failed to keep it out of the hands of his competition.  That issue, though, is for another day and article!