PAYMENTSfn 2019 – Surviving Black Friday: Tales from an e-Commerce Engineer by Aaron Suggs

(upbeat music) – How are people feeling right now? (audience cheering) All right, all right, a
little more, a little more. (audience cheering) All right, fantastic. Yeah, so I wanted to start
out by saying thank you to Peter and Helen and Spreedly and all the folks who helped organize and put on this conference. This is my second year being here, and I think it’s a really
great local community and a great payments resource
that y’all are building here. I wanted to say thanks for that. This is an original talk that I’m giving, and so I’m excited to drop
some fresh content here. Let’s make sure my clicker’s going. Yeah, I’m Aaron Suggs. I go by @ktheory on the Internet, that’s Twitter and GitHub. Feel free to @ me and slide into my DMs. I’m director of engineering at Glossier, and one of the teams I lead
is our tech platform team that’s responsible for site reliability, infrastructure, performance, security, and developer productivity. Before Glossier, I was doing payments and
platform ops engineering at Kickstarter, like Peter mentioned. For those of you who might
not be familiar with Glossier, let me tell you a little
bit about what our brand is, what our platform is. We’re a direct to
consumer e-commerce brand in skin care and makeup
and beauty, that category. If you identify as a woman and you use Facebook or Instagram, there’s a pretty good chance that you’ve seen some of our ads. What makes us special and
unique as a beauty company is that we are really good at
listening to our customers. The company started out as Into The Gloss, a blog by Emily Weiss, who was interviewing women
about their beauty routines, sitting in their bathrooms, talking about their skincare
regimen and that sort of thing. Then we have what we call our G-team, our customer experience team, that’s really interactive on Reddit and Instagram and all these places where there is a really solid community of people talking about
beauty and skin care. You know, the values of the company are skin first, makeup second, as in taking care of
yourself and being healthy and having good hygiene is a necessary thing that
everybody wants to do, and then makeup can be an optional second. Just in terms of business metrics, we don’t share a lot of public information about the financials, but in 2018, we did over
$100 million of revenue. That is growing very quickly. It’s a mix of both the e-commerce website and some permanent retail locations and some pop-up retail locations we do. We’re about 200 employees. 30 of them are software
engineers on the tech team, and just because people are
interested in our tech stack, it’s Ruby On Rails using Solidus, which is a fork of Spree e-commerce. We use Stripe as our payment gateway, and it’s all on AWS. I wanted to talk about our
Black Friday experience. This is for people who are maybe not super familiar with US consumer industry. This is a consumer bonanza
in the United States. It’s the Friday after Thanksgiving. We think of this as
our peak holiday period extending from Friday, Saturday, Sunday, through Cyber Monday, and it’s like those four
days can often represent a typical month’s worth of revenue
month’s worth of revenue compressed in that short time period. For Glossier specifically,
we run a 20% off promotion. It’s pretty uncommon for us to do discounts or promotions at all. And so, consumers are really eager to get our products during this time when they’re on sale. Having this huge surge of traffic when a lot of new customers
are looking at your site presents a unique scaling challenge to deliver reliable e-commerce experience. Visually, this is what our
order volume looked like on Black Friday 2018. You can see that once we get to Friday, we’re doing about 10
times our normal traffic. Friday is the biggest day, and on Monday, that is our
second biggest day of the year, and then the Saturday
and Sunday in between are still way above typical
daily volume, order volume, and are our third and fourth
largest days of the year. This talk is a narrative
about how we prepared, how it went, and what we
learned from that experience. They say people don’t wanna
see how sausage gets made, and this is a talk
where I’m gonna show you how sausage gets made. I invite you to celebrate our wins or enjoy the schadenfreude or
commiserate in our failures. You know, it’s not all smooth sailing. It wasn’t all rosy. This is a PagerDuty alert
that went out at 12:07 a.m. seven minutes into Black Friday. “The site is under high load,
would appreciate assistance.” It’s a really interesting story of what happened in this case. We did a lot of things well, but some things didn’t go as planned, and we learned a lot. Let’s rewind. (mimicking mystical bells) We’re gonna go back to September of 2018 when we decide that we
are gonna get serious about preparing for Black Friday. We know it’s coming up. The first thing we decide
to do is make a team. This is on the technology side of things, on our logistics and customer experience and product development side of things. They’ve been planning
for this for a long time. The tech team, we were starting to face
this challenge head-on, saying, like, we’re at a scale where we can’t just all be working on user-facing features. We have a lot of shared infrastructure and site reliability concerns. This is when we set aside four engineers, named a directly responsible engineer, a directly responsible
individual, that was myself, to just say, “You are the
point person for ensuring “that our site is going
to work on Black Friday.” The long-term mission of this team was more broad than that. Empowering our other tech teams to quickly deliver
reliable product features is our general tech platform mission, but the narrow, specific short-term thing was just make sure we’re
up for that peak traffic. Okay, so we made a team. Team has the skills that we
need to address the challenge. Next step was to make a plan. We were gonna be more
rigorous and systematic about fully answering the question, what should we expect to
happen during this peak period, and have we covered all
the bases that we should in order to make sure that, in order to gain confidence
that things will go as planned? And then you execute the plan. And now, sometimes I look
at this, and it’s like, is this too trite? It seems so simple and
obvious, like, duh, okay. You can make a team and make a plan, and it sounds so easy to have
these three-step programs. But I also wanna say, not all three-step plans
are created equally. It’s pretty easy to have
a simple-sounding plan that actually turns out to
be completely ineffective because it’s relying on poor assumptions or wishful thinking. To this end, I wanted to shout out this book recommendation, here, Good Strategy Bad Strategy
by Richard Rumelt. This book has really helped
clarify my own thinking and several of my colleagues
and friends’ thinking about how to clearly state a challenge and develop coherent steps to address whatever business challenge
you’re trying to face. So, we have this team with the skills to address the question of how do we provide this
reliable e-commerce experience on Black Friday. We’re making a plan, but what is the plan? The plan is capacity testing. Doing capacity testing of your system forces you to understand how it behaves in different scenarios. I’ve seen some teams do this in incomplete or ineffective ways, so I wanted to explain what I see are the three necessary,
important ingredients of doing good capacity testing. Number one is define your target. You have to know what you’re aiming for and what success looks like when your system is operating well. This forces you to make a
prescription about what your system ought to do, rather than just a description of how your system behaves. Next step, you measure what
your actual capacity is. This is really helpful to just know the limits of your system. This is the descriptive part. And then the final step
is you remove bottlenecks until you meet your
capacity target, right? And so, steps two and three here are in a programming loop, right? You just keep looping
back in between measuring, removing bottlenecks, measure again, until you meet your capacity target. Okay, so let’s dig into how you define a good capacity target. First off, this is a very
collaborative experience. Even though the tech team is doing a lot of the work on this, figuring out what is the target, what should we expect on Black Friday, that is a very cross-functional
and collaborative effort. In particular, the data team and the
marketing team in our case brought a lot of context and expertise to helping to come up with a good target. And because there’s this prescription of what you’re aiming for, it focuses organizational
alignment on the same goal. In our case, what we decided
to focus on for our targets were the peak orders per minute and the peak pages per minute across the three most important
types of pages on our site. That’s the homepage, PLPs, and PDPs. If you’re not familiar
with the e-commerce jargon, PLP is a product listing page, like a search result where you just see a whole bunch of products in a list. PDP is a product detail page where you’ve clicked on the product and you’re seeing everything
about that product, the reviews, ingredients,
or measurements or whatever, that kind of thing. And so, for most e-commerce sites, those three types of pages,
PLPs, PDPs, homepage, are the most important experience. This captured the customer journey that we were expecting
during peak traffic, right? Somebody lands on the homepage, they click around some PLPs, maybe they click a few PDPs, they add some stuff to their cart, they click around these
three pages some more. Eventually they start to check
out and they do an order. And then knowing from
just my understanding of how our website works, we knew that the peak order volume, whatever minute was the high water mark of what we should expect
in terms of volume, that’s what we needed to
make sure we could sustain, and then the rest of the time, when the order volume is less than that, you know, we’re sitting pretty. Okay, but like, what other information did we
bring to defining this target? Fortunately, ’cause the
company’s been around for more than one Black Friday, we had prior information
that we could look at. We said, okay, let’s look
at this window of time, the few months preceding that
peak holiday traffic from 2017 and basically assume a
same proportional change. We can say, what was the proportional jump from, say, a random weekday in November to the peak Black Friday traffic in 2017? Now we can assume that it’s
gonna be about the same here.
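To make that arithmetic concrete, here is a back-of-the-envelope sketch in Ruby. Every number below is made up purely for illustration; the real forecast came from the data and marketing teams.

```ruby
# A sketch of the proportional-scaling estimate, with hypothetical figures.

weekday_2017_orders_per_min      = 5.0   # hypothetical: ordinary November 2017 weekday peak
black_friday_2017_orders_per_min = 40.0  # hypothetical: Black Friday 2017 peak
weekday_2018_orders_per_min      = 12.0  # hypothetical: ordinary November 2018 weekday peak

growth_multiple = black_friday_2017_orders_per_min / weekday_2017_orders_per_min  # ~8x
expected_peak   = weekday_2018_orders_per_min * growth_multiple                   # ~96 orders/min

# Pad the target a bit above the expectation, then capacity test against it.
capacity_target = (expected_peak * 1.25).ceil
puts "Target: sustain ~#{capacity_target} orders/minute at peak"
```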
Looking at the shape of our traffic, we were expecting a nice, gentle hill: the expected customer behavior was steady growth
throughout the morning. This was organically growing traffic, and then it would peak
around, say, 11 a.m. or so and then gently taper
off throughout the day. This was distinctly different than a big countdown to a flash sale, where it’s just very low
sales, very low sales, and then, boom, right at a certain moment, there’s a huge spike. That is a harder thing to plan for and ensure that you have
the sufficient capacity for, whereas when you have
that nice, gentle peak, you have time to react to whatever, to all the volume that is building up. Let’s just put a pin
in this important point because it might come up in the future. (audience laughing) All right, so, we’re planning
for that gentle peak. Okay, so step two. We wanna measure our
actual capacity, right? This is something that the
tech platform team could own. We used a service called flood.io. It worked out great,
they were very helpful. You write a TypeScript file that models the flow throughout your test. We decided to use a production
environment because, you know, there was this question of, should we use staging or build some other
production-like environment? We really didn’t wanna get in the place where our tests weren’t realistic because we made some bad assumption about whether our test environment was sufficiently production-like, so we decided we’re just gonna
use the same infrastructure. But in order to do this, we needed to enable the
sandbox account in Stripe, because there’s no credit card that would actually let you place thousands of orders in a couple minutes and have those go through. We made a little customization
to allow us to do that.
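As a hedged sketch of the kind of customization involved (the email pattern and method names are illustrative, not the actual code), the idea is to route orders placed by capacity-test accounts to Stripe’s test-mode keys so the charges succeed without moving real money:

```ruby
# Sketch only: send capacity-test orders through Stripe's test mode.
# The email pattern and helper names are hypothetical.

CAPACITY_TEST_EMAIL = /@loadtest\.glossier\.example\z/i

def capacity_test_order?(order)
  order.email.to_s.match?(CAPACITY_TEST_EMAIL)
end

def stripe_api_key_for(order)
  if capacity_test_order?(order)
    ENV.fetch("STRIPE_TEST_SECRET_KEY")  # sandbox key: charges succeed, nothing is billed
  else
    ENV.fetch("STRIPE_LIVE_SECRET_KEY")
  end
end
```

That same marker, a recognizable test-account email pattern, is also what lets a data pipeline exclude these orders from business reporting, as described below.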
We ensured that these real orders were not actually fulfilled by our warehouse. These were orders paid
for with fake money, and so, we didn’t wanna
actually deliver them. And then we also made a little exception
to our business reporting that we would exclude
these orders as well, so the marketing team didn’t
look at our capacity testing and say, “Wow, our conversion
rate is through the roof!” I wanna say kudos to our data team for having this built
into our data pipeline to automatically exclude
certain email regexes from our business reporting dashboards. Oh, pro tip. Before getting into this, I asked the team, I was like, “What do you think our current capacity “is actually gonna be?” And then the winner got the
baked good of their choice and the most ridiculous
hat I could find on Amazon. This was a really nice
moment because, you know, we were actually kinda pessimistic about what our volume would be, and the person who had
named this super high, it seemed almost outlandishly high, they were actually the most accurate, and it is because they knew that we had really optimized
the promotion logic that had been a bottleneck
previously in the year. Oh, another bonus of doing
this capacity testing is just to get our
capacity testing to work, we sussed out a bunch of
bugs and race conditions that had been affecting a few
orders throughout the year. This really just was a bit
of hardening on our system in order to even support
the capacity testing. Okay, so we’re measuring our capacity. Now step three is you remove bottlenecks until you meet your capacity target. There are really two big
ways to do this, right? Either you can scale up, add more servers, add bigger servers. You can just add more capacity that way, or you can optimize, which is instead of spending money to get more server resources, you spend engineering effort to make your system work more efficiently. There’s a trade-off of, we did some of both, honestly. I wanted to call out this trap of, you cannot just improve any performance aspect of the website. You’re only adding capacity
if you improve the performance of something that’s the bottleneck. Let’s say for example you make your JavaScript payload smaller by removing some dependencies. Unless that network bandwidth of downloading the JavaScript
was the bottleneck, which it probably isn’t, you haven’t actually
increased your capacity, but you have increased performance. You’ve done something nice, but not necessarily what you needed; you have to be careful to make sure that you’re addressing the bottleneck if you are trying to
increase your capacity. Okay, so, a little framework for how to identify bottlenecks. This is coming from some experience with systems thinking and just trying to chase
down a lot of bottlenecks from time to time. You pick one of each of these two columns that I’m gonna show. First one is computing resources. Every server, you know, has CPU, it has disk
IO, it has network IO, and then you have all the tiers
of your application stack, your load balancers or
application servers, your databases or whatever. Whenever you’re making a web request and you’re waiting for that to come back, you’re waiting, what you’re waiting for is one of these compute resources on one of these system tiers. Finding that bottleneck is
just chasing where you’re like, okay, now we’re waiting
for CPU on the app servers. Now we’re waiting for disk
IO on the database servers. Whichever one is taking the most time and is the easiest to get rid of in that request response lifecycle, that’s what you want to be addressing in order to improve performance and improve the capacity of your system. Second book recommendation, Thinking in Systems by Donella Meadows. This has been a really helpful book for me to clarify how to model and understand the behavior of complex systems. You know, I think a theme
of this conference has been payment systems are
really complex systems, and this has been a great
way to break it down. All right, so, quick recap. Capacity testing, three easy steps. Define your target. That’s collaborative and cross-functional. Measure your capacity,
flood.io helps a lot there. And then, remove bottlenecks until you meet the capacity target. Couple more things that
came out of this process that were pretty helpful, fleshing out the plan a little bit. We happened to know a familiar contractor who had optimized a bunch of
checkout systems previously, and so we hired that person,
and it was a big help. I wouldn’t say that’s super generalizable, but if you can hire good talent, do it. We put all the copy and promo code changes for the Black Friday launch
behind a feature flag. That means we were testing it
in production amongst staff for weeks ahead of time. The actual go live at midnight was the most trivial code change. It was just flipping on a feature, and it was all code paths
that we’d been exercising during the capacity testing beforehand.
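As a sketch of what that looks like in a Rails app, assuming a Flipper-style feature-flag library (the flag and group names here are illustrative):

```ruby
# Illustrative only: gate the Black Friday copy and promo behind a flag.

# In the view/controller code path:
if Flipper.enabled?(:black_friday_2018, current_user)
  render "promotions/black_friday_banner"   # 20%-off copy, promo code, etc.
end

# Weeks ahead of time: enable only for staff, so the new code paths run in
# production (and get capacity tested) without customers seeing them.
Flipper.enable_group(:black_friday_2018, :staff)

# At midnight on Black Friday, "go live" is a single toggle:
Flipper.enable(:black_friday_2018)
```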
It’s so nice to be able to de-risk big changes like that. We also had a really solid
internal communication plan. For example, we made a
dedicated Slack channel that everybody would be on. This was the tech team,
logistics, our retail team, it’s customer support. The data team is all in
there talking together, having quick decisions,
quick context sharing about what was going on on Black Friday. We made a special PagerDuty alert that anybody could just send an email and page out several engineers all at once to make sure that there
was quick attention on any issue that came up. We weren’t planning on using that. And then we also made an
hourly on-call rotation for throughout the weekend. Fortunately, we have a bunch
of engineers in Canada, and in Canada, Black
Friday is just Friday, so they were on call throughout the day. Saturday and Sunday it was like, we’d all pick a couple, an hour or two that we
would be on call for, and we were really sitting at our desks knowing that this was a
critical time for the website. Okay, so the results of
our capacity testing. Where did we end up after
doing all of this work? The lowest number we had was
the expected traffic volume. Then we had set our target
a little padded above that, and then we even exceeded
our target by 2X to 4X on certain metrics. We thought we were sitting pretty, there. Then we knew what our bottlenecks were at our capacity that we measured. For the checkout rate, our bottleneck was database CPU. A lot of that came from
inventory accounting. This is atomically decrementing inventory as you add it to cart or checkout. And for page views,
homepage, PLPs, et cetera, it’s application CPU, it was just limited by how many
app servers we were running. Knowing those capacities, we could even, as an extra backup plan, prepare some mitigation techniques, right? If database CPU became the bottleneck, we could disable inventory
accounting per SKU, and disabling inventory, this really means we’re switching from a strongly consistent inventory tracking to a weak, eventually consistent one where we would look at the
recent line items we’ve sold and ask ourselves, do
we have enough in stock?
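Roughly, the two modes look like this. This is a sketch; the model and column names loosely follow a Solidus-style schema, but none of it is the actual code:

```ruby
OutOfStockError = Class.new(StandardError)  # hypothetical error class

# Strongly consistent: atomically decrement stock in the database at checkout,
# refusing the order if the row would go negative. This is the hot path that
# was pegging database CPU.
def reserve_stock!(variant, quantity)
  updated = Spree::StockItem
    .where(variant_id: variant.id)
    .where("count_on_hand >= ?", quantity)
    .update_all(["count_on_hand = count_on_hand - ?", quantity])
  raise OutOfStockError if updated.zero?
end

# Eventually consistent: skip the per-order row update and instead compare
# recent sales against the last known stock level.
def probably_in_stock?(variant, stock_at_start_of_sale)
  sold_recently = Spree::LineItem
    .where(variant_id: variant.id)
    .where("created_at >= ?", 1.day.ago)
    .sum(:quantity)
  stock_at_start_of_sale - sold_recently > 0
end
```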
Because of how our business is, this was possible to do: we knew we had plenty in stock and we weren’t gonna sell out. It was just a little gotcha
for this program called BOPIS where you buy online, pick up in store for New York retail locations, and this is a really popular thing to do, but we just have much lower inventory in our retail locations
than in our warehouse, so we decided that we
would leave this enabled, we would leave the strongly consistent inventory tracking enabled
for that BOPIS experience because we’re so over capacity and we don’t expect that we’re even gonna need this capacity. And so, if we needed
to add more app servers to scale up the app CPU, we knew that took about 20 minutes to do. We also could vertically
scale our database; it’s a Postgres RDS database on AWS. That takes about 15 minutes to bring up the new server, but your site’s available during most of that time, with only two to five minutes of downtime to reboot.
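For the database lever specifically, the vertical scale is roughly one API call. This is a hedged sketch using the AWS SDK for Ruby; the instance identifier and instance class are illustrative:

```ruby
# Sketch only: bump the RDS instance class and apply it right away rather than
# waiting for the maintenance window. Identifiers here are made up.
require "aws-sdk-rds"

rds = Aws::RDS::Client.new(region: "us-east-1")
rds.modify_db_instance(
  db_instance_identifier: "glossier-production-db",  # hypothetical identifier
  db_instance_class: "db.r4.8xlarge",                # the next size up
  apply_immediately: true
)
```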
All right, and then we had various feature flags
didn’t absolutely need, right? Phew, okay. We had this rock-solid plan,
tested the shit out of it. We got this, right? Black Friday comes, 12:07,
get the PagerDuty alert. What’s going on? Oh, my goodness! All right, let’s go back to 10 p.m. on Thursday, Thanksgiving, and we send out this email. Subject line is T-Minus 2 hours. I wouldn’t be surprised if
some folks in the audience got this email, and we said, “Oh, you know,” the call to action here is to add something to your calendar, which isn’t the best call to action, but we’re just like, “Oh, you know, get psyched about this deal “that’s coming in two hours.” And so, when marketing and
I had reviewed this together where we convinced ourselves well, oh, this isn’t a flash sale countdown because we’re just saying, we knew that this sale was
gonna go on for four days, and we have plenty of inventory, so you don’t need to do this
big rush right at midnight. But we hadn’t thought, we hadn’t sufficiently put ourselves in the customer mindset where they’re used to some products going out of stock from time to time. And they’re like, “Oh, there’s
gonna be this big sale. “I wanna get them before they
possibly go out of stock.” So, here’s what this looks like from a Datadog monitoring perspective. This is our add to cart metric, right? So, how often are people
adding stuff to their cart? We see at 10 p.m. when
this email goes out, suddenly a lot of people
are adding stuff to cart, and if I showed the actual checkout rate, it’s funny, ’cause the
checkout rate goes down. People are just adding
stuff to their cart, waiting, leaving it there, knowing that this 20% discount’s
gonna go on at midnight. All right, so I am actually, I have the 6 a.m. Friday
morning on-call shift. I think I’m being a mensch, gonna take the early morning shift. I’m going to sleep, and meanwhile, a bunch of people on the team who were staying up for the
go live at midnight see this, and they’re like, “What,
what is going on?” All right, so, let’s look at, now we’re gonna zoom
ahead to include midnight. 20% promo goes live, and oh my God. There’s that sheer cliff that we knew we wanted to avoid. The site becomes barely usable. Several orders are getting through. We are selling, we’re having more order volume
than we’d ever seen before. But there were also a lot of
people who were getting errors, particularly timeouts. PagerDuty goes off. Myself and many other engineers all hop on this conference call. We’re in the Slack rooms
sending lots of metrics, trying to talk about what is going on, ’cause we were not expecting this. Really quickly, we align
on three different levels to look at the problems. I’m gonna say what the symptoms
are from a business level. The site was sluggish and customers were
getting frequent errors. Fortunately, we were already
using the comms channels that we’d set up as part of our planning. Kudos to us, good planning. At an app level, though, the problems were very high page views, you know, more page views
than we’d planned for, and our checkout rate was well
above what we’d planned for, and there were many timeout errors. Now, in the silverest of linings, the site was broken, but in a way that was just how
we predicted it would break from our capacity testing, so we were actually able to use experience we’d seen in our capacity testing to know what we can do in this case. There was something familiar
about these systems, but we still didn’t understand why there was so much demand. We hadn’t totally connected the dots to how that countdown email had changed customer behavior at midnight. And then from a system level, we were looking at this business level, app level, system level. It’s just our app and database
CPU are pegged at 90%-plus. There was basically like no capacity left to process any more checkouts, et cetera. So, what are the levers that we can pull to make this better? We decided to pull all three
levers at once that we had. We disabled inventory tracking, we scaled up, we added a
bunch more app servers, and we vertically scaled our
database to the biggest one. Now, in retrospect, we could’ve probably just
disabled inventory tracking, and a lot of the extra page views weren’t necessarily extra customers, but really just people
who were frustrated, getting timeouts, and
so you start refreshing, or your page is taking a
while to load, so you refresh, and it’s just that sort of vicious cycle. Our key learning here was
that we should’ve prepared some of these mitigation
scripts ahead of time. We assumed again that we’d
have that gentle curve, and we could say, “Oh, you know, “inventory tracking is
taking a lot of database CPU. “Let’s disable it for one or two SKUs,” and we’d be clicking around
in a web UI to do that. In fact, what we needed to do was en masse disable all
the inventory tracking. And so, it took us a couple minutes to just write this script because we didn’t wanna click
through our scores of SKUs.
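The script itself was short. Something along these lines, sketched against Solidus-style models (verify the attribute names before trusting it), is the sort of thing worth having in a runbook ahead of time:

```ruby
# Sketch of the mitigation we wished we'd had ready: turn off per-SKU inventory
# tracking everywhere except the low-stock BOPIS items.
# BOPIS_SKUS is a hypothetical allow-list.

BOPIS_SKUS = %w[retail-sku-1 retail-sku-2]  # illustrative values

Spree::Variant
  .where.not(sku: BOPIS_SKUS)
  .update_all(track_inventory: false)
```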
This added several minutes to the remediation. Another key learning was that when you try to scale
a Postgres RDS database that’s under high load, it takes longer than when
it’s under low load. In our case, it took about 20 minutes to go from when it started
being unavailable to restart until it actually came back online. This was really surprising. There was a really dark
moment where we were like, “Boy, do we promote one of
our replicas to the leader, “or do we wait for this
thing, is it gonna work? “It’s only ever taken
five minutes in the past!” I think there was some stuff
around transaction timeouts that we needed to configure in Postgres that would allow this to be faster in the future.
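The talk doesn’t name the exact settings, but the usual suspects are statement and idle-transaction timeouts. A speculative sketch only; the database name and values are illustrative, and in practice these would more likely live in an RDS parameter group:

```ruby
# Speculative sketch: cap how long statements and idle transactions may run,
# so a restart or failover isn't stuck waiting on stragglers.
ActiveRecord::Base.connection.execute(<<~SQL)
  ALTER DATABASE glossier_production SET statement_timeout = '15s';
  ALTER DATABASE glossier_production SET idle_in_transaction_session_timeout = '30s';
SQL
```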
Here’s a graph of our order volume from midnight ’til 1 a.m. You can see that there was this really high spike early on, and that spike probably would’ve
been even higher if we’d had sufficient capacity. It’s hard to say how high
that really would’ve been. But in order to understand the impact of what we lost here, it’s really where this crying face is. Everything in that area is where there was a
bad customer experience that we’ve aimed to do
better in the future. Now, in retrospect, had we just disabled inventory tracking before this went live, we’re pretty confident,
but we don’t know for sure, I would say it’s likely
that we could’ve just, it would’ve been smooth
sailing all the way through. One other thing that we had
to do was fix broken orders. We lacked atomicity on some
of our checkout processes, we’ve realized, and so there were a
bunch of orders between, placed in that hour that were
in this inconsistent state, maybe missing collateral, or we hadn’t collected the payment even though we sent a confirmation email. Here we got on a conference call with our customer experience team, and we workshopped what
the customer comms would be. We were writing a script to fix things.
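The repair script had roughly this shape. This is a hedged sketch; the time window, scopes, and the follow-up helper are illustrative, not the actual code:

```ruby
# Sketch only: find orders completed in that first hour whose payment never
# actually went through, retry the capture, and flag the rest for the
# customer-experience team. CustomerExperience::FollowUp is hypothetical.

window = Time.zone.parse("2018-11-23 00:00")..Time.zone.parse("2018-11-23 01:00")

Spree::Order.where(completed_at: window).find_each do |order|
  next if order.payments.completed.any?          # payment is fine, skip

  if (payment = order.payments.pending.first)
    payment.capture!                             # retry the charge we never collected
  else
    CustomerExperience::FollowUp.enqueue(order)  # hypothetical helper for manual outreach
  end
end
```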
This was a really effective collaboration that let us go from this unfortunate customer experience to something that we ended
up communicating really well and handling really well. We fixed all the callbacks. And so then, here’s the
rest of Black Friday. This is what we were expecting
with that nice, gentle hill. Oh, my gosh, that is so nice. This was interesting because, you know, I go to bed late at night and then I wake up in the morning, and I don’t know if I’m
in this new world where, oh my gosh, are all our
expectations different? Midnight was so different. It’s gonna be a wild ride
the rest of the time. No, this was crazy accurate what our data and marketing
team had forecast. Our peak traffic was within
10% of what they’d predicted. I mean, like, if engineers give you an
estimate that’s within 10%, I’m like, what is this magical
superpower that you have? (audience laughing) That was really impressive. And overall, we exceeded our
revenue targets for the day despite the problems at midnight. So overall, it’s like, you know, success with an asterisk, right? Mrs. Lincoln, besides that, how’d you like the rest of the play? (audience laughing) (Aaron laughing) So, right, there’s obviously
room for improvement, but we exceeded
our revenue expectations. The rest of the weekend was delightfully boring and predictable from a site reliability perspective, and, boy, that midnight
thing was unexpected. So, how did we turn that
into learnings for next year? We do our blameless learning review, notes to ourselves for next year. Boy, midnight was a surprise! Let’s better understand
customer experience when we’re making one of
these flash sale countdowns. I wanna call out this
anti-pattern of wanting to say, boy, well, you know, we padded our expectations a little bit. Do we just need to pad
them more in the future? I wanna just call that
out as an anti-strategy. You can’t just say our
estimates were wrong, let’s just pad them more. That’s not bringing any new
information to the estimate, and really, you wanna
understand what you missed in, what you weren’t
capturing in your estimate and then look to make
architectural improvements that would dramatically
improve our capacity for subsequent years. You know, we thought through
what the user behavior was that was different, some of the operator behavior of having that script ready
to go for disabling inventory or just disabling inventory
tracking preemptively, and prioritized a more resilient
architecture for next year. So, our 2019 tech roadmap
includes these things, like dramatically improving the
reliability and performance. One of the big projects
here is pre-generated pages for the homepage, PDPs, and PLPs. That basically obviates the backend app CPU work; it would all be cacheable on a CDN, which is very scalable.
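A minimal sketch of what that looks like for, say, a PDP, assuming a CDN that honors standard Cache-Control headers (the TTL is an illustrative value):

```ruby
# Sketch: let the CDN serve the pre-generated page and only hit Rails on a
# cache miss.
class ProductsController < ApplicationController
  def show  # the PDP
    expires_in 5.minutes, public: true   # Cache-Control: public, max-age=300
    @product = Spree::Product.friendly.find(params[:id])
  end
end
```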
We’re also moving to an asynchronous checkout flow where we’re optimistically taking orders with minimal validations, knowing that we can fix whatever’s wrong in arrears, retroactively saying, “Oh, you know,
you’d used previously “didn’t work this time. “Please log in and update
your payment method,” or something like that, but we don’t need to
do all the validations and all the inventory accounting
before taking your order.
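A speculative sketch of that flow with ActiveJob; the job, mailer, and queue names are illustrative, not the actual implementation:

```ruby
# Sketch: accept the order with minimal validation, respond to the customer
# immediately, and finish the payment work in the background.
class FinalizeOrderJob < ApplicationJob
  queue_as :checkout

  def perform(order_id)
    order = Spree::Order.find(order_id)
    payment = order.payments.pending.first

    begin
      payment&.capture!   # collect the money after the customer has moved on
    rescue Spree::Core::GatewayError
      # Fix it in arrears: ask the customer to update their payment method.
      OrderMailer.payment_failed(order).deliver_later  # hypothetical mailer
    end
  end
end

# At the end of checkout: take the order optimistically, then enqueue the rest.
FinalizeOrderJob.perform_later(order.id)
```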
Now, I wouldn’t wanna say we’re just doing this so that Black Friday is easier. We’re doing this to also
drive important business goals around conversion and retention. Having a fast, reliable experience improves conversion and your customer lifetime value. That’s why it’s an easy organizational sell to say these are projects
that we should invest in, because it’s gonna drive these
important business metrics. Throughout the team, we’ve deepened our debugging and systems thinking expertise. Our capacity planning and
testing has been super useful, and we now do that
before any major launch, and it’s given us a lot more confidence. And so, as a moment of good news, I will say in March we
launched a new brand called Glossier Play, and we basically took all these learnings and we applied them. We did the capacity testing. We preemptively disabled
inventory tracking. We were able to look at optimizations that we made to our checkout flow and had measured our capacity three times beyond what our peak was at that midnight, Black Friday, and our Glossier Play launch, from a tech reliability perspective, was delightfully boring. All right, that’s all I have. Thank you everyone for
your kind attention. I have samples that I can
give out after the talk, or come find me during a break. Thanks for your kind attention. (audience clapping) (upbeat music)
