Some tech is getting pricier and starting to look a lot like the older services it was supposed to beat: namely video streaming, ride-hailing, and cloud computing.
I agree with the first two, but the cloud one conflates a bunch of different things, concepts, and problems.
Working in cloud architecture, I have not seen cloud prices rise over the last 10 years; quite the opposite. Storage gets cheaper all the time, instances get more powerful for the same rate, and so on.
The article says cloud, but then it lists off a bunch of software-as-a-service (SaaS) offerings. Yes, they run on the cloud, and they probably have gotten pricier, but Office 365 is not the cloud; it’s a shitty subscription software service.
It is true that for certain extremely large or high-volume clients, the cloud can be a lot more expensive. I’d argue the prime cost driver there is data egress, which cloud vendors make a ton of money on. Hosting your own Netflix on AWS is gonna cost a LOT.
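To make the egress point concrete, here's a rough back-of-envelope sketch. The $0.09/GB figure is an assumption based on AWS's published first pricing tier; real bills vary with volume discounts, CloudFront, and region, and the 50 TB/day workload is made up for illustration:

```python
# Back-of-envelope egress cost for a hypothetical video service.
# Rate assumed: ~$0.09/GB (roughly AWS's first public egress tier;
# actual pricing depends on volume, CDN usage, and region).
def monthly_egress_cost(tb_per_day: float, rate_per_gb: float = 0.09) -> float:
    gb_per_month = tb_per_day * 1024 * 30  # 30-day month, binary TB
    return gb_per_month * rate_per_gb

# A modest 50 TB/day of streaming traffic:
cost = monthly_egress_cost(50)
print(f"${cost:,.0f}/month")  # roughly $138k/month at list price
```

At Netflix scale, multiply that by a few orders of magnitude and the incentive to own your own delivery network becomes obvious.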
But let’s look at it another way: you’re a small business previously hosting in your own office. You have a number of clients that need different amounts of storage and CPU. To complicate it further, some of your clients have busy periods, daily, weekly, or at random, where they see 10,000% more traffic during a several-hour spike than the rest of the time.
So, first off, you need to buy enough servers to handle all your clients’ peak loads. Maybe that involves spending $4,000,000 up front on hardware, racks, cooling, facilities, etc. Now someone has to install all that stuff. Maybe you have someone on staff who does this; he works nights and weekends to get it all in there, and let’s say he costs another $100k annually plus benefits. But realistically, you need more than just the one guy for that much kit, so let’s say you have 3 or 4 of him.
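The waste in sizing for peak is easy to quantify. Here's a sketch with hypothetical numbers matching the scenario above (a 100x spike for a few hours a day); the point is just how little of the peak-sized fleet is used on average:

```python
# Illustrative only: what fraction of peak-sized capacity gets used
# when traffic spikes 100x (10,000% of baseline) for 3 hours a day.
def average_utilization(baseline: float, spike_multiplier: float,
                        spike_hours: float, hours: float = 24.0) -> float:
    peak = baseline * spike_multiplier
    demand = baseline * (hours - spike_hours) + peak * spike_hours
    return demand / (peak * hours)

util = average_utilization(baseline=100, spike_multiplier=100, spike_hours=3)
print(f"{util:.1%}")  # ~13.4% of the hardware you paid for, on average
```

The other ~87% of that $4M fleet sits idle, but you still paid for it, power it, and replace it.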
Now you’re up and running smoothly, but it’s been about 4 years and your fleet is approaching end of life. You have a lot of enterprise drives to replace, you need new CPUs and servers, clients are asking for GPU instances and 10-gig fiber, some have SANs in the petabyte range, and things are getting complicated.
So you opt to move into a data center where you can offload your physical security, cooling, power and space to someone else. Things are good again.
Now some of your clients want to go multi-region for compliance reasons, and your biggest data-storage client starts requiring cold storage with quick-recovery SLAs, a few more 9s of durability, and mirroring to the West Coast and Europe.
Your model just got a lot more fucking complicated and expensive, and you now need to hire a dozen more people, sign agreements with multiple data center locations, etc.
Or
You could be just one guy with Terraform/CloudFormation, some nice automation, and a multi-account org structure, doing all the same things and more, but without all the salaries, support contracts, vendor SLA agreements, drowning in audits, and so on.
Do you pay a premium for all of that? Sure. But do you save a ton on workforce, compliance, and time, and get access to world-class data center power, security, durability, IO, availability, flexibility, and scalability, with no capital expense, controlled completely through IaC? Also yes.
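The "one guy with IaC and a multi-account org" workflow boils down to being able to enumerate and act on every account programmatically. Here's a minimal sketch: the function takes an AWS Organizations client (boto3's `client("organizations")` in practice) and walks the real ListAccounts pagination to collect every active account, so the same pipeline can then be pointed at each one. Illustrative, not production code:

```python
# Sketch: enumerate active accounts in an AWS Organization so a single
# IaC pipeline can iterate over all of them. Field names and pagination
# follow the Organizations ListAccounts API.
def active_account_ids(org_client) -> list:
    ids = []
    token = None
    while True:
        kwargs = {"NextToken": token} if token else {}
        page = org_client.list_accounts(**kwargs)
        ids += [a["Id"] for a in page["Accounts"] if a["Status"] == "ACTIVE"]
        token = page.get("NextToken")
        if not token:
            return ids

# In practice: for account in active_account_ids(boto3.client("organizations")):
#     assume a role into it and run your Terraform/CloudFormation deploy.
```

One loop like that, wired into CI, replaces a lot of the coordination overhead that used to be people's full-time jobs.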
Using Dropbox’s and Netflix’s transitions to their own data centers as examples is terrible, because they’re among the biggest storage and data-usage companies in the world, accounting for multi-digit percentages of all internet IO. Of course something that big can save money by cutting out a middleman and their markup.
That represents something like .000001% of the people actually using cloud services to host, the vast majority of whom saved tons of money, time, and effort by making all those details someone else’s problem and focusing on business use cases instead of endless logistical ones.
For the small office, AWS, i.e. “the cloud,” is definitely easy and economical. However, the promised economies of scale are not easily realized in larger organizations. There are a number of reasons for this, but two of the main ones are: the provider’s interests are aligned with the subscriber spending as much as possible on compute, storage, and I/O, and most subscribers, especially the larger ones, are notoriously bad at properly measuring, managing, and optimizing those resources. Additionally, the promised manpower reductions are overblown in the glossy slides the C-suite sees. Sticking your computers in somebody else’s data center saves a bit of upfront grunt work, but you still need everybody from the sysadmin up to deliver the service.
The transition is inevitable, of course, as organizations of all sizes globally rush to concentrate their compute and storage infrastructure into three major providers and get data centers and bare metal off their balance sheets. The premise that these providers will jack up prices once they have enough control of the market seems reasonable based on where we are today. AWS now charging for public IPv4 addresses and increasing the cost of its email service may be just the beginning of what they can get away with. If there is a way to squeeze out smaller providers completely, they will definitely find it.
I regularly support clients where reserved instances, compliance tooling, and auto-scaling by themselves translate into millions of dollars of annual savings over on-prem.
Our biggest data-storage client has over 7PB in S3 with legally mandated retention and destruction policies. Spread across about 100 different projects, central management of that alone probably saves $100k a month in man-hours for auditing, storage-class transitioning, purging, and inventory.
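Retention, class transitioning, and purging like that are typically codified as S3 lifecycle rules rather than done by hand. Here's a hedged sketch: the dict shape matches what boto3's `put_bucket_lifecycle_configuration` expects, but the prefix and day counts are placeholders, not anyone's real policy:

```python
# Sketch: a retention/destruction policy as an S3 lifecycle rule.
# Objects under `prefix` move to Glacier after `archive_days` and are
# permanently deleted at `purge_days`. Numbers here are illustrative.
def retention_rule(prefix: str, archive_days: int, purge_days: int) -> dict:
    return {
        "ID": f"retention-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": archive_days, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": purge_days},
    }

rule = retention_rule("project-a/", archive_days=90, purge_days=2555)  # ~7 years
# Applied with:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="some-bucket",
#     LifecycleConfiguration={"Rules": [rule]})
```

Once the rules are in code, the transitions, purges, and the audit evidence for them happen automatically across every project bucket.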
I oversee one client with over 200 AWS accounts in their org, and I’m able to do it solo. In the 90s, that would have involved dozens of people just to support the hardware and networks.
You’re not wrong that orgs can do it badly and fail to leverage services properly, but that doesn’t mean it can’t be done well. You know how much a NIST 800-53 ATO costs in labor hours alone in a large org? It’s $$$. Cloud tooling automates so much of that and largely eliminates the hardware and physical-control components entirely.
Plus, when you consider GovCloud and FedRAMP: that’s stupidly hard to do on your own, but with cloud you get those things as built-ins.
For auto-scaling to realize material savings, the variation in the workload needs to represent a significant change in the production footprint. Many private-sector applications now being dumped into relatively expensive cloud compute and storage services don’t have that profile. A handful of virtual servers inside a corporate data center serving an internal user base is usually uneconomical to refactor or replace with a lower-cost footprint, at least for now.
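That profile argument is easy to see with numbers. A sketch with made-up round figures: compare paying for a fixed peak-sized fleet all day versus scaling the fleet to hourly demand, for a spiky workload and for a flat internal one:

```python
# Illustrative: auto-scaling only saves money when demand actually varies.
# Instance counts and the $/instance-hour rate are made-up round numbers.
def scaling_savings(hourly_demand: list, cost_per_instance_hour: float = 1.0) -> float:
    """Fractional saving of scaling-to-demand vs. a fixed peak-sized fleet."""
    peak_cost = max(hourly_demand) * len(hourly_demand) * cost_per_instance_hour
    scaled_cost = sum(hourly_demand) * cost_per_instance_hour
    return 1 - scaled_cost / peak_cost

spiky = [10] * 21 + [100] * 3   # big daily burst, like the earlier scenario
flat  = [10] * 24               # internal line-of-business app
print(f"{scaling_savings(spiky):.0%}")  # ~79% saved vs. peak provisioning
print(f"{scaling_savings(flat):.0%}")   # 0% — there's nothing to scale away
```

For the flat workload, auto-scaling buys nothing, and the migration cost has to be justified some other way.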