Horizon – Urs Hölzle – Infrastructure for the Long Term

[MUSIC PLAYING] URS HOLZLE: Thank you. Way too much credit, Diane,
but welcome to Horizon. And as Diane
mentioned, we’ve been building a hyperscale cloud and
running it for quite a while. And it has seven applications powered by this cloud, each with more than a billion users, right? Search, Maps, YouTube, and so on, and so on– and as Diane mentioned, the eighth one is actually GCP, Google Cloud Platform, itself. So every day, Google Cloud
Platform applications– not by Google, by
our customers– touch over a billion different
IP addresses worldwide, every single day. So it’s really an
incredible scale, and that scale requires
an incredible investment. I’m very proud that the
10-plus years of investment– not just money, but
really expertise, very hard work by
a very large team– is being recognized today. And I want to start
off by just showing you how we’re different from
other clouds in a few ways. So the first one, Diane
already mentioned a little bit, is our network. It’s probably one of our
biggest areas of investment. We have hundreds of thousands
of miles of fiber that we own. The largest, fastest,
highest-capacity submarine cable in the world, for
example, was built by Google. And generally speaking,
you can reach, with this private backbone,
pretty much any place on Earth. So that really means,
pretty much, any place. So for example, we have physical
infrastructure in Bhutan. If you don’t know
where it is, look it up– great, really
wonderful country. Next, we also have operated
at scale for a very long time. And if you operate
on our scale, you kind of realize that efficiency
and performance is important. And so we have lots of
time and lots of expertise to really do that, and it shows. Another GCP first is live migration. Just because we do
maintenance or just because we have
to do an upgrade, you shouldn’t need
to do anything. So on GCP, despite us
upgrading things all the time, your workload just
keeps running. And of course, we provide
great value– thank you, Lydia– up to 59% less
for the same workload, at great performance. And what’s more important, we
put the cloud back in cloud, because with per-minute
billing, custom VM types, automatic
discounts, no worrying about reserved instances
or long-term contracts, you pay as you go. That’s what Cloud
was invented for. And so with our
approach to pricing– our user-friendly
approach to pricing– we’re just much easier
to do business with. So of course we’re
improving this product. Diane talked a little
bit about the launches. I want to highlight
just a few things that launched in the past few weeks. One was our SQL service, V2. It’s the highest
throughput SQL service on any cloud; automated storage
sizing for SQL, so you never run out of storage
space for your database, because we grow it
automatically for you; weather data for
BigQuery, if you want to correlate your
data with weather; new machine learning APIs; VM
rightsizing recommendations to save you money; lower
pricing for preemptible VMs; another first, resizing
persistent disks without downtime, while
their VM keeps running. So it’s just a few
of the improvements that we launched in just
the last four or six weeks. Now, we see incredible demand
from pretty much anywhere in the world. And to make sure that all users
have a cloud next to them, we’re going to roll
out new regions at a rapid pace in the UK,
in Germany, in Finland, and in India, and Singapore,
and Australia, and Brazil, and, last, northern Virginia. And this is on top of Oregon, which we rolled out earlier. Japan is coming next. And starting in
2017, we’re going to roll out roughly one
new region per month. And we’re going to keep up
that pace for quite a while, so this is just the first
wave of our expansion. You’re going to hear more about
the next wave in the coming months. All of these regions are
going to be multi-zone, so you can run
high-availability applications in a single region. And they’re going to be
connected by the Google Cloud network, the largest private
backbone in the world. So your connectivity
is going to be fast, no matter where you are, and
no matter where your users are, because you can reach them
using the same connectivity that Google uses to reach its users. All right, now, I
told you that we’re building a different Cloud. And I want to tell
you a little bit more, but to do that, let
me go a little bit into the history of cloud. So cloud, in many ways,
started about 20 years ago with colocation, right? Colocation let you rent
data center space instead of having to worry about
investing, building, maintaining, running
a data center. So that was great. And pretty soon afterwards
it went, really, to the second wave
of Cloud, where all of this infrastructure
was virtualized. So you have the same
components– servers, disks, networks, whatever,
load balancers. But now they’re
virtual, not physical. And that’s great because what
used to be a purchase order now becomes an API call. And poof, you have a new
machine, or a new disk, or a new whatever. And that really transformed
how people do IT. It’s a very, very deep impact. But it maintained the same
structure of the data center. Everything’s
virtual now, but you deal with the same components. And it’s just as hard
to put them together, and to maintain them, and to
actually keep things running, right? So that forces the
user to still focus a lot of their effort
on the wrong things. Do I have enough servers? Do they have the right type? Did I purchase too much
or maybe too little? Do I have to upgrade my OS? Do I have a security alert? All of these things, your staff
is forced to pay attention to, instead of writing
new applications, creating value, having
innovative applications. And innovative
applications are ultimately how businesses win, how they get new users, how they get new markets, how they make their employees happy. So until about 10 years
ago, we were actually in that same boat. Our internal infrastructure
looked about like that, and we were struggling
with all of that. And we realized that we
can’t really maintain our pace of innovation. We can’t really be Google
if we don’t get out of this complexity. So about 10 years
ago, we really started to redo our infrastructure. We started with a
container-based model, moving away from machines,
and with automated services and scalable data. And that lets our developers
enjoy unprecedented scale, reliability, and security,
without worrying that much about the
infrastructure behind it. And this is what we have brought
to Google Cloud Platform, and that many of you
benefit from today. Let me just give
you a few examples. So for example, Google App
Engine lets you write an app. You never have to
worry about servers. You never have to worry
about operating systems. You don’t even have to
worry about regions, zone outages, nothing, right? You can have an
ambitious application. You can run it for a year,
and you can have literally– and I mean that literally– zero
hours on administration. Or take another
example, BigQuery, our analytics solution. There is no provisioning
of an analytics cluster. There’s no maintenance,
there’s no sizing, there’s no upgrading. Just bring your data
and start querying. Or Container Engine runs
your Kubernetes containers without you having to
really administer anything. Or even for conventional
things like Hadoop clusters, you can be up and
running in 90 seconds, have your Hadoop cluster,
zero administration. So NoOps is really a big step. And the third wave of
computing, I think, needs to focus on making
all of the power of Cloud accessible, without you worrying
about the details behind it. Now, Google was founded
with an ambitious mission– organize all the
world’s information. And that obviously required
us to become experts at handling lots of data, right? Just for Search,
we have to store an index and then serve a private copy
of the entire web, right? And we’ve been doing
that for 15 years, right? We’ve had our first
trillion-line log analysis running, sometime
in the early 2000s. So we’ve been dealing with
big data for a long time, and that forced us
to really pioneer many of the tools and
techniques that are used today in big data. And we’re a data
driven business, and so this wasn’t just
about infrastructure. This was about
creating tools that let many Google employees
analyze and understand the data that we have. So let me talk a little
bit more about that because externally, in
Google Cloud Platform, we took these internal tools
and made them available to you. And they are really making a
big impact for our customers. The key tool is BigQuery. BigQuery is our
analytics solution, and it lets you take pretty much
an unlimited amount of data– and I mean that literally–
and run SQL queries against it, and get results in seconds. Don’t have to worry about
scalability, or maintenance, or anything– no provisioning,
no maintenance, zero ops. So it’s really been a
game changer for many. And I’m very excited
today to announce that we have a major
new version of BigQuery available, BigQuery
for Enterprise. And one of the key changes
in BigQuery for Enterprise is that you can
now update tables. So that turns it from
an analytics solution into a full data warehouse. So BigQuery Enterprise comes
with many very nice features. I can mention just a few. I already mentioned
support for DML, so you can do updates,
deletes on your data. It supports ANSI SQL,
so all of your tools or pretty much all of
your tools will work out of the box with
BigQuery Enterprise. Heavy users can
get flat pricing, a predictable bill every month. I’ll talk a little bit
more about that afterwards. And of course, like
all of our services, you have identity
and access management that gives you security and
compliance for your data and access to your data. We have lots of other
features I can’t talk about. For example, just
like in Google Docs, we try to make
collaboration easy. So sharing queries is as
easy as sharing a link. And of course, being from
Google, it’s extremely fast. We asked a third party
to run some benchmarks based on the industry-standard
TPC-H benchmarks. Here’s a graph that shows when multiple users run TPC-H query number 10– no, number 22, on
10 terabytes of data, sorry. So when you have a few
users, both systems are reasonably fast. But as you increase
usage, the power and the automatic scalability of
BigQuery really starts to show. And at 32 users,
it’s 29 times faster because BigQuery can scale
automatically for you. And it scales independently
of the data size, so you get just the
right performance at a very attractive price. Let me actually
elaborate on that. So this high
performance does not come at the penalty of
paying through your nose. It’s not just easier to use. It’s also cheaper to use. Now, how is that? Here’s another comparison. This assumes that
you have a data warehouse that grows over time. And so you get not just
the higher performance that you just saw, but you
get low predictable cost because BigQuery only uses
resources when you are actually running queries. So let me give you an analogy. It’s a little bit like
a light switch, right? So with Amazon Redshift, you
have to pick a room size, and you have to pick the number
of light bulbs in that room. And then you have to
turn it on, and you have to leave it turned
on for the entire month, whether you’re in the
room or not, right? And no surprise, your bill
at the end of the month is going to be
pretty substantial. In comparison, BigQuery is
kind of a little bit more like a motion-sensor
light switch. So the light goes on
when you enter the room. It goes off when you leave it. Magically, the room is
just always well lit, and the right size. And so no wonder your
bill can be lower at the end of the month. And so you get this scalability
and the zero administration at a much lower price
than comparable solutions. And so with that, it’s
really pretty easy to choose your Cloud
data warehouse, because really no other
data warehouse offers you a combination of
these functionalities and the choice
between pay as you go, which is ideal for light
users, and fixed pricing, which is ideal for heavy users. So that much about analytics–
and obviously, analytics is really, really
important, because it lets you take your data
and derive value from it. But often, machine
learning can actually derive even bigger value
from that same data. All right, to give
you an example, just a few months ago
we applied machine learning to our own data
centers and to the controls of the cooling system
of that data center. And we were able to save
up to 40% of cooling energy in our data centers
by applying machine learning. So it’s really terrific. The problem is that
machine learning has a high hurdle to adoption. You need a lot of expertise
and a lot of compute to actually make
it work for you. And our goal is to change that. Our goal is to democratize
machine learning and to really make it possible
for everyone to use it. And how we do that, I’ll show
you in the next few minutes. The key to it is Cloud
Machine Learning. And the easiest way to
use Cloud Machine Learning is to use complete solutions. So we launched a strong set
of machine learning APIs that solve common problems for you. For example, here’s
a sound file. Tell me what’s spoken
in that sound file. These are trained machine
learning models, trained by us, maintained by us, delivered fully as a service. Basically, bring
your data and get knowledge, insight, out of it. These have been launched
throughout the year and have actually been some
of our most popular APIs ever. So usage is skyrocketing. But on top of having these
canned solutions that solve common problems,
you also need to be able to build
your own solution. And for that, we created
Cloud Machine Learning. It went alpha earlier this year. And today I’m
excited to announce that it’s now available to
everyone as Cloud Machine Learning Beta. It comes with a number
of great features. First, it should be
no surprise by now that it is a NoOps
machine learning solution. So you don’t have
to create a training cluster, or a
prediction cluster, or choose machine size,
or whatever, right? You just focus on your problem. And once you have your
machine learning model, you just install it, and
we autoscale it for you to arbitrary
demand– so zero ops to run and to train your models. Of course, you get
performance as well. So you can do training at scale. You get the full power,
the parallelized computing of the Google Cloud,
and so model size is not going to be a problem. It comes with a nice UI as well. You’re going to see
that in the demo. Datalab is kind of
like a notebook that lets you develop your
code, but also document it and share it very easily. And in Datalab, you can also
manage your training runs, your different versions. You can connect to BigQuery
or to Google Cloud Storage, so it’s easy to build
a full solution, not just a machine learning model. And last but not least, we
have a very special feature that we call Hypertune. It’s a little bit
esoteric, but when you build a machine
learning model, one of the most
important things is to get the model roughly right. But then, usually,
today that would be followed by days and weeks
of tuning, because this model has a lot of parameters. And you need to find out which
settings actually maximize the accuracy of your model. And Google Cloud ML Hypertune
does that maximization for you. So we try lots of versions
in the background. And you can
accomplish in an hour what, today, takes an expert
days or even weeks to get to. And you’ll see
that a little bit. It’s really an
incredible time saver. But rather than me
talking to slides, let’s actually see
that in action. I’d like to invite Rob
Craft up to the stage. And he’s going to
give you a demo of a few different ways that
customers use machine learning. Rob, over to you. ROB CRAFT: Thank you, Urs. Thanks very much. URS HOLZLE: Yep. ROB CRAFT: Good morning. [APPLAUSE] So is my mic on? It is on, awesome. So what you’re looking at is
80,000 Creative Commons images that we put through
our Vision API service. As Urs mentioned, this
is a fully trained model, based on a lot of
learnings we had creating Google
Photos, which does an amazing job of
understanding cognitively what you’re showing
it, in pixel form. So here we mapped 80,000-worth,
and I plugged it into a nice, pretty word cloud that gives
you an idea on how these things map, that makes sense to people. And then we have
a couple of images to the side that kind of show
you a little bit of the power. So the first is
the recognition– and forgive me for the
analogy– but the internet is powered with cats. And here is a cat to prove it. [LAUGHTER] Urs has private islands
bought with cat money, I think, at this point. So there’s two things
that are important here. One is that we’re successfully
labeling, on the side, what the image is. And as you might suppose,
close up, far away, color of, long hair, short
hair, looking at camera, not looking at camera– you have
to have a pretty sophisticated understanding of
visualization systems to get this sort
of thing correct. But you want to know how
correct it is, because it could be a business decision. You never believe anyone
without data, of course, if you’re running a
successful business. So on the side here, we’re
giving you an indication of how certain we are. So we’re 99% sure it’s
a cat, and 83% sure it’s a domestic shorthair. We give you lots of
other information, like the color palette. For those of you with brand
sensitivity for colors, and textures, and feels for
your advertising campaigns, this matters a lot, and
it’s really hard to get. And we offer a
lot of other data, like is it
inappropriate content? 90% of the new data
arriving in your business is unstructured data like this. It’s voice, it’s audio,
it’s in a language that you don’t speak. It’s directly from a customer. It’s about your brand
on the internet, that you can’t find yourself. Systems like this drastically
change the capability to understand how people
are talking about you, how you would like
to be talked about, and as well as what
imagery matters. We also do it for the other
languages and capabilities. So let me do one more, and
I’ll flip over to another demo. This one is pretty exciting for
those of us that like sports. On the surface, for
those of us from the US, this is very obviously
a baseball stadium. A computer can’t
make that assumption, so we start from scratch. We don’t believe you when
you label this as a baseball stadium, for example,
because it turns out people aren’t always necessarily
truthful on the internet. I know it’s surprising,
but there we are. So it’s a stadium. It’s a team sport. We’re even picking up
some baseball equipment. You see the little picture in
the background, the guy holding the baseball bat? We picked up that it’s a
baseball bat, not just a stick. And here’s the part that
I’m especially proud of. Not only do we grab the
text on all the signage in the background– and we’re
offering bounding boxes soon, so you can pick what area of
the image you want to look at– but we identify it. Out of all the baseball
stadiums, this is Citi Field. So that capability
applied to landmarks, using the power
of what we learned from Maps and other systems,
becomes available for you. And to do one of these images
is literally six lines of code. So this vastly changes the
world around what machine learning can provide, versus
the work you think you have to do to
get machine learning benefits in your applications. OK, so we’re done. Everyone ready to do their
first neural network? Let’s do a neural network. This will be fun. So we’re going to start
with something that matters a lot for this room. Let’s predict
whether the S&P 500 is going to close up or down. In this case,
we’re going to take 50 years plus worth of data
across 3,200 stock tickers. We’re going to combine
it using some training information across
other pieces, pump it through a neural network. And out the other side is
going to come a prediction. This mirrors what a lot of people in your businesses are doing today, or what you’ll
be asked to do going forward, which is create your own model. So we’re using the
Google Datalab tool that Urs mentioned a
little bit earlier. It’s a notebook
where you can actually compile queries
and do visualizations natively in here. An example of a visualization
of the data set is here. You can see that it’s
a little bit noisy. So we’re going to pre-condition
some data to clean it up– much tighter
distribution on the data set, so we’re going to learn
faster with our model. And what does a model look like? Well, in this case–
pardon the lingo drop, but I need to show that
it’s a real thing– it’s a feedforward neural network,
4 hidden layers, 1,024 nodes per layer. We’re doing rectified
linear units between, and then we have a softmax
regression– long-winded way of saying this is
a reasonable demo. It ain’t production, so
please don’t do trading based on the numbers you’re
going to see at the bottom. [LAUGHTER] I said it. I am not responsible. So this is a prettier picture,
for those of us that forget everything that I just said. It’s a reasonable explanation
of what we’re trying to do. This is how terse the code is. The tooling and machine
learning is really, really hard. We’re doing it better. We’re making it more simple. The fact that it isn’t scribbled
on a napkin or a white board and erased as soon as
your data scientists leave the room is a big step
forward for the marketplace, and we’re going to continue
to improve the experience. Let me get to the money
and the power here. Use Cloud ML because it
takes a long time to train. Let me skip right
to the money shot. It’s 123 hours to do
what I just explained on roughly a developer
workstation-class machine. Come back in five days,
and you get your answer. Come back in five days,
and get your answer, and then begin iteration. Wait another five days, or
argue about it for three days and then come back
in another five days. Welcome to a data scientist’s nightmare. That’s the state
of the art today. By just changing
one line– you say, I want to run it on a premium instance. I don’t configure
any more machines. Come back in three hours. And if you want to go more,
we can go more if you’d like. Don’t contact me directly. We have people. So now you enter into
the interesting space of, it’s OK but not great. The accuracy isn’t what I want. It’s 60%, and I’m not
going to make a business decision on 60%. If only there were a wand with
some magic pixie dust in it. Well, it’s not quite
magic pixie dust. It’s a lot of PhD sweat and
tears that have gone into it. But Hypertune allows you
to automatically tune the aspects of the model that may not be learning, or that are merely asymptotic toward the result you’re asking the model to predict. So through one simple
change of code– it literally is
one line of code. And if Rob can find it
quickly– there we go, jobs trainer task– off you
go, and you get the machine now spinning up multiple,
multiple instances, running multiple
experiments at once, picking which one is maximizing
towards the prediction that you want, and
rendering an answer. So there is what you
need to do if you’re a data scientist with Cloud ML. I encourage you to compare
it to what your current data scientists get to use. It’s a very different proposition. So this is if you
create your own. Let’s see what someone
actually created themselves. So we have a used car
customer in Japan. And what they are trying to
do– and this is a real example. This is the model they
created, not Google. They allowed us
to demo for them– is they wanted to better
understand what they just bought at auction or what
they would like to put on sale at auction in Japan. So we have a whole series
of images to the side. Imagine I’m a used car salesman. I took a bunch of
pictures of the vehicle. And I just would like it to
go up on the auction site immediately,
without me tediously identifying what the image was,
front and rear, what the model was, getting it wrong. So let me go ahead and
select all of my camera reel. I’m going to drop it directly
onto the application. Let me minimize. And you’ll see, these
bits are actually going to a hosted instance
in our new Tokyo data center. But you see it actually
beginning to do some analysis. This takes about 30 seconds. And I’ve been
threatened with my life if I go too long, so let
me flip over to the answer that you’d see in 30 seconds. We’re 89% sure– we
being the used car company– this is a 2012
Toyota Land Cruiser Prado. That’s not available
in the US, by the way. It’s anywhere from $39,000
to $43,000, currently at auction, because we’re
combining other data sources in the background. And the images
have been correctly identified– right side,
left side, quarter panel, interior stick
shift, control panel, you get the idea– drastically
changing the experience. But it’s not machine
learning in your face. So you can train your own
models using pretty simple tools at Google Scale, and I hope
that you found that interesting. Thank you. [APPLAUSE] URS HOLZLE: Thank you, Rob. So that’s really great. You could see that actually,
as a company dealing with very physical
objects like cars, you can make use of much
more data with Cloud ML. This graph here
shows the adoption of machine learning at Google. And as people
discover the power, and as the tools get better, you
can really see usage explode. So today it’s used in pretty
much every Google application. And our goal with Cloud ML
is that the adoption rate of machine learning in your company looks like this, because
tools are getting easier and more powerful to use. Let’s actually look at
another example, a real world example of one of our customers
using Cloud Machine Learning. And so please welcome
Mathias Ortner from Airbus. Mathias? [APPLAUSE] So for a little
bit of background, Mathias has been working on
image analysis for the past 15 years, for Airbus, a company
with 38,000 employees, 13 billion euros of revenue,
working in all kinds of areas in air and space. But today we’re talking
about satellite imagery– commercial satellite imagery. So Mathias, tell us a
little bit about the problem that you were facing. MATHIAS ORTNER:
Well, basically we are selling satellite
images, and we want to provide our customers
with high-quality images. And that means we don’t
want to have clouds within the image, because
if you have clouds the bottom is hidden, because
we’re looking from the sky– well, from space. And the customer won’t
pay for such an image, so what we are doing is
we try to figure out, in the end, if we have
clouds within each image we are selling. So OK, here you have an example. Maybe you can try to guess
or you have the solution. But you can see sometimes it’s
kind of hard to distinguish clouds from the sky. URS HOLZLE: That’s right. So the image actually shows
how difficult this is. This shows a bunch of mountains. Mostly, the white is snow. But actually, in
the upper right, if you look really
closely, it’s not snow. It’s clouds. So that actually sounds
like an interesting problem. I assume Airbus has been
working on this for a while. Is this an easy
problem or a hard one? MATHIAS ORTNER: Well,
that’s the point. Actually, we’ve been working
on that for 20 years. And– [LAUGHTER] Well, we try to be very
smart, I have to say. But no automatic solution has
proven to be reliable enough. And every day, we have human
operators drawing cloud masks on every single image we
download from our satellites. And it’s lots, because
we are downloading thousands of images every day. So that’s what brought us
to use machine learning and try to figure
out what can be done with these new technologies. URS HOLZLE: That’s
a pretty big job. If you look at
just this picture, it will take you a while to be
accurate about what’s a cloud. And then just imagine
thousands of pictures. So I can see how you’d want
to have an improvement, and why you’d try
Cloud Machine Learning. Well, did it work for you? MATHIAS ORTNER: Well, yeah. Well, I started to use Machine
Learning like five years ago, and we had some success
on several problems. And we started to
use Cloud Machine Learning on this specific
topic, beginning of this year. And we had tremendous
results, so yeah. It works very well. URS HOLZLE: Can you
give us some numbers? MATHIAS ORTNER: Yeah, sure. Well, our previous
automatic solution had an error rate around 11%. So that’s a lot,
and that’s why we need these human
operators to just draw cloud masks on the images. And we achieved a 3% error
rate using TensorFlow and Cloud ML, which is much better. We now have a solution
for our problem, so this is the new situation. I think what is most
impressive for us, and for everybody in this room,
is it took us only three months to reach this point, which
is much faster than what we used to have in
previous developments. I think it’s really impressive. URS HOLZLE: That is
really impressive, yes. How was your experience with
Google overall, not just Cloud ML, but Google
Cloud, per se? MATHIAS ORTNER: Well,
I’m very positive. Your people were very responsive. Both commercial and technical
teams were really helping me and our teams to just get
focused on what we have to do. And there’s [INAUDIBLE]
our business. I mean, really a great
experience– people were very dedicated, hard workers. What can I say? URS HOLZLE: Thank you. So what’s next for
you and for Airbus? MATHIAS ORTNER:
Well, first of all, we are now
investigating how to put this algorithm into production. That’s the first step. And now that we have seen that these problems can be solved, we have plenty of ideas. We want to bring added
value to our customers. And we will for sure use machine
learning in several products in the coming years. URS HOLZLE: OK, so
you’re probably– clearly it was successful for you. And you started using
the alpha version, so you’re a few months
ahead of everyone else who can try the beta today. If there’s one or
two things that you would give as advice to our
audience, what would you pick? MATHIAS ORTNER: Well, I
think something really new is happening now. This problem– I worked on it six years ago, three years ago, and we
couldn’t find any solution. And now machine learning
can solve a problem that couldn’t be solved before. That’s the first point. And I think it’s not
only about bringing new services or new stuff, it’s
also about our day-to-day job. Our engineering work is now
changing, and moving forward quickly. URS HOLZLE: All right,
thank you very much. That’s really amazing,
congratulations. [APPLAUSE] MATHIAS ORTNER: Yes. URS HOLZLE: So I just want to
recap that, because it’s really an amazing story. There’s a hard problem,
very clearly, right? It would be hard
for any one of us to go and pick clouds
on these images. They’ve been working on it
for literally two decades. And in three months,
with our alpha version, Mathias was able to beat the
existing system by a factor of four in error rate, night? That’s truly amazing. And I actually want to
leave you with one more thought, because those of you
who are old enough to remember might remember that about
10 years ago, Netflix had a competition with a $1
million prize to researchers out there– anyone could
participate– to predict what movies you’re going to like,
based on a set of existing movie ratings. And the best
researchers, hundreds of teams across the world,
competed in that competition. And after three years, someone
won the $1 million prize. And the amazing thing is that
today, with Google Cloud ML, in one day you can
build a model that would be competitive
in that competition. And I bet that in
under a month, you could build something
that would win, right? So that is how much
the field has changed. Just 10 years ago, or
less than 10 years ago, only the world’s best experts,
struggling for three years, could do something. And today, actually,
you have a fair chance to meet or beat these
results, because the tools have gotten so much better. So if you’ve been thinking
about machine learning, now is really a good time
to start trying it out. So now let me just
recap a little bit what we talked about today, right? I hope I convinced
you that we’re building a different
kind of cloud, one that’s really oriented around making
it much easier to use all of the capabilities, moving
away from virtualized hardware to scalable services,
scalable data. And I hope I’ve shown you
that with the new version of BigQuery, scalable
data got even easier. You have a data
warehouse in a cloud, zero admin, infinitely
scalable, at a superb price and with superb
performance, and your choice of whether you want to have
variable pricing or fixed pricing. And then beyond that, I
hope that Mathias just convinced you– or
the other demos– that machine learning,
really, today is within reach
for most companies. If you use Google Cloud ML, even
in its beta version, in fact, even in its alpha version, it
can deliver incredible value to you. And obviously, we’re spending
an enormous amount of time on making these tools and
the infrastructure behind it better every single day. So it only will get
easier in the future. Now, so that’s the true power
of this next-generation Cloud, because you can focus on
insights, on creating value from your data, not on
orchestrating and administering all that stuff.
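As an aside for readers who want to picture the demo more concretely: the model Rob described (a feedforward network with ReLU hidden layers and a softmax output, plus Hypertune trying many settings in the background) can be sketched in miniature in plain Python. Everything below is illustrative, not the demo's actual code: the layer width, random weights, and the toy "validation score" landscape are all made-up placeholders, and the random search only stands in for what Hypertune automates.

```python
import math
import random

def relu(v):
    # Rectified linear unit, applied elementwise.
    return [max(0.0, x) for x in v]

def softmax(v):
    # Numerically stable softmax over the output scores
    # (here, two classes: "closes up" vs "closes down").
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def dense(inputs, weights, biases):
    # One fully connected layer; weights is a [n_out][n_in] matrix.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def build_network(n_in, width, depth, n_out, rng):
    # Random weights stand in for trained parameters.
    sizes = [n_in] + [width] * depth + [n_out]
    return [([[rng.uniform(-0.1, 0.1) for _ in range(a)] for _ in range(b)],
             [0.0] * b)
            for a, b in zip(sizes, sizes[1:])]

def forward(x, layers):
    # ReLU on every hidden layer, softmax on the output layer.
    for w, b in layers[:-1]:
        x = relu(dense(x, w, b))
    w, b = layers[-1]
    return softmax(dense(x, w, b))

rng = random.Random(0)
# The demo used 4 hidden layers of 1,024 nodes; kept small here.
net = build_network(n_in=8, width=64, depth=4, n_out=2, rng=rng)
probs = forward([0.1] * 8, net)  # two class probabilities summing to 1

# A toy stand-in for Hypertune: random search over hyperparameters,
# keeping the setting with the best (invented) validation score.
def validation_score(lr, width):
    # Fake landscape peaking near lr = 0.01 and width = 512.
    return 0.9 - 0.1 * abs(math.log10(lr) + 2) - 0.0002 * abs(width - 512)

best = None
for _ in range(20):
    lr = 10 ** rng.uniform(-4, 0)              # log-uniform learning rate
    width = rng.choice([128, 256, 512, 1024])  # hidden-layer width
    score = validation_score(lr, width)
    if best is None or score > best[0]:
        best = (score, lr, width)

print(probs, best)
```

The point of the sketch is the division of labor Urs describes: you specify the model shape and the search space for its parameters, and the service runs the many training trials in parallel and keeps the best one.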

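The light-switch pricing analogy from the BigQuery portion of the talk reduces to simple arithmetic. All prices below are invented placeholders, not Google or Amazon list prices; the point is only the shape of the comparison between an always-on provisioned cluster and pay-per-query billing.

```python
# Provisioned model: you pay for the "room size" around the clock,
# whether or not anyone is querying.
cluster_price_per_hour = 2.00  # hypothetical hourly rate
hours_per_month = 730
provisioned_cost = cluster_price_per_hour * hours_per_month

# On-demand model: you pay only for data actually scanned by queries.
price_per_tb_scanned = 5.00    # hypothetical per-terabyte rate
tb_scanned_per_month = 40
on_demand_cost = price_per_tb_scanned * tb_scanned_per_month

print(provisioned_cost, on_demand_cost)  # 1460.0 200.0

# Crossover: scanning more than this many TB/month is the point where
# a flat-rate (or provisioned) plan starts to beat pay-per-query.
crossover_tb = provisioned_cost / price_per_tb_scanned
print(crossover_tb)  # 292.0
```

This is why the talk frames the choice as light users taking pay-as-you-go and heavy users taking flat pricing: below the crossover the motion-sensor model wins, above it a fixed monthly bill does.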

  1. Google is running shit scared of AWS! Google Cloud started in 2012 (after ex-Amazon employee Steve Yegge, by then at Google, posted his famous platforms rant), AWS started in 2006, MS Azure in 2010.

  2. 03 March 2017: http://venturebeat.com/2017/03/03/google-cloud-launches-vm-instances-with-64-virtual-cpu-cores/ What was that about a different Cloud? Hah.
