Excerpts - Working Backwards

March 12, 2022

Culture

“We have an unshakeable conviction that the long-term interests of shareowners are perfectly aligned with the interests of customers.”2 In other words, while it’s true that shareholder value stems from growth in profit, Amazon believes that long-term growth is best produced by putting the customer first.

Our culture is four things:

Of course, these four cultural touchstones don’t quite get at the “how,” that is, how people can work, individually and collectively, to ensure that they are maintained. And so Jeff and his leadership team crafted a set of 14 Leadership Principles, as well as a broad set of explicit, practical methodologies, that constantly reinforce its cultural goals. These include: the Bar Raiser hiring process that ensures that the company continues to acquire top talent; a bias for separable teams run by leaders with a singular focus that optimizes for speed of delivery and innovation; the use of written narratives instead of slide decks to ensure that deep understanding of complex issues drives well-informed decisions; a relentless focus on input metrics to ensure that teams work on activities that propel the business. And finally there is the product development process that gives this book its name: working backwards from the desired customer experience.


The Amazon Leadership Principles

The Amazon Leadership Principles are the main focus of chapter one. In the very early days of the company, when it consisted of a handful of people working out of three small rooms, there were no formal leadership principles because, in a sense, Jeff was the leadership principles. He wrote the job descriptions, interviewed candidates, packed and shipped boxes, and read every email that went out to customers. Taking part in every aspect of the business allowed him to communicate the Amazon philosophy informally to the relatively small group of employees.

We created the Bar Raiser because the company was growing extremely fast. One of the major pitfalls of needing to hire a lot of new people very quickly is urgency bias: the tendency to overlook a candidate’s flaws because you are overwhelmed with work and need bodies. The Bar Raiser provides teams with methods to make the strongest hires efficiently and quickly, but without cutting corners.

In a company known for its inventiveness, separable, single-threaded leadership has been one of Amazon’s most useful inventions. We discuss it in chapter three. This is the organizational strategy that minimizes the drag on efficiency created by intra-organizational dependencies. The basic premise is, for each initiative or project, there is a single leader whose focus is that project and that project alone, and that leader oversees teams of people whose attention is similarly focused on that one project.


The customer is also at the center of how we analyze and manage performance metrics. Our emphasis is on what we call controllable input metrics, rather than output metrics. Controllable input metrics (e.g., reducing internal costs so you can affordably lower product prices, adding new items for sale on the website, or reducing standard delivery time) measure the set of activities that, if done well, will yield the desired results, or output metrics (such as monthly revenue and stock price). We detail these metrics as well as how to discover and track them in chapter six.
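
To make the distinction concrete, here is a minimal, hypothetical sketch of a weekly review that tracks a few controllable input metrics against targets and treats revenue as a downstream output. The metric names, targets, and values are invented for illustration; they are not Amazon’s actual metrics or tooling.

```python
# Hypothetical sketch: reviewing controllable input metrics vs. an output metric.
# Metric names, targets, and values are illustrative, not Amazon's real numbers.

input_metrics = {
    "new_items_added_this_week": {"target": 500, "actual": 430},
    "standard_delivery_days":    {"target": 3.0, "actual": 3.4},   # lower is better
    "unit_cost_reduction_pct":   {"target": 2.0, "actual": 2.1},
}
output_metrics = {"monthly_revenue_usd_mm": 120.4}  # observed result, not directly controllable

def off_target(name: str, m: dict) -> bool:
    # For "days"-style metrics, lower is better; otherwise higher is better.
    lower_is_better = name.endswith("_days")
    return m["actual"] > m["target"] if lower_is_better else m["actual"] < m["target"]

for name, m in input_metrics.items():
    status = "needs attention" if off_target(name, m) else "on track"
    print(f"{name}: target={m['target']} actual={m['actual']} -> {status}")

# The weekly review spends its time on the inputs above; outputs such as revenue
# are expected to follow if the inputs are consistently healthy.
```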


He identified several reasons why the book category was underserved and well suited to online commerce. He outlined how he could create a new and compelling experience for book-buying customers. To begin with, books were relatively lightweight and came in fairly uniform sizes, meaning they would be easy and inexpensive to warehouse, pack, and ship. Second, while more than 100 million books had been written and more than a million titles were in print in 1994, even a Barnes & Noble mega-bookstore could stock only tens of thousands of titles. An online bookstore, on the other hand, could offer not just the books that could fit in a brick-and-mortar store but any book in print. Third, there were two large book-distribution companies, Ingram and Baker & Taylor, that acted as intermediaries between publishers and retailers and maintained huge inventories in vast warehouses. They kept detailed electronic catalogs of books in print to make it easy for bookstores and libraries to order from them. Jeff realized that he could combine the infrastructure that Ingram and Baker & Taylor had created—warehouses full of books ready to be shipped, plus an electronic catalog of those books—with the growing infrastructure of the Web, making it possible for consumers to find and buy any book in print and get it shipped directly to their homes. Finally, the site could use technology to analyze the behavior of customers and create a unique, personalized experience for each one of them.


From the tone of customer emails to the condition of the books and their packaging, Jeff had one simple rule: “It has to be perfect.” He’d remind his team that one bad customer experience would undo the goodwill of hundreds of perfect ones.

Another of Jeff’s frequent exhortations to his small staff was that Amazon should always underpromise and overdeliver, to ensure that customer expectations were exceeded. One example of this principle was that the website clearly described standard shipping as U.S. Postal Service First-Class Mail. In actuality, all these shipments were sent by Priority Mail—a far more expensive option that guaranteed delivery within two to three business days anywhere in the United States. This was called out as a complimentary upgrade in the shipment-confirmation email.


“You must have experience designing and building large and complex (yet maintainable) systems, and you should be able to do so in about one-third the time that most competent people think possible.”


Amazon’s Leadership Principles:

  1. Customer Obsession. Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers.
  2. Ownership. Leaders are owners. They think long term and don’t sacrifice long-term value for short-term results. They act on behalf of the entire company, beyond just their own team. They never say, “that’s not my job.”
  3. Invent and Simplify. Leaders expect and require innovation and invention from their teams and always find ways to simplify. They are externally aware, look for new ideas from everywhere, and are not limited by “not invented here.” As we do new things, we accept that we may be misunderstood for long periods of time.
  4. Are Right, A Lot. Leaders are right a lot. They have strong judgment and good instincts. They seek diverse perspectives and work to disconfirm their beliefs.
  5. Learn and Be Curious. Leaders are never done learning and always seek to improve themselves. They are curious about new possibilities and act to explore them.
  6. Hire and Develop the Best. Leaders raise the performance bar with every hire and promotion. They recognize exceptional talent, and willingly move them throughout the organization. Leaders develop leaders and take seriously their role in coaching others. We work on behalf of our people to invent mechanisms for development like Career Choice.
  7. Insist on the Highest Standards. Leaders have relentlessly high standards—many people may think these standards are unreasonably high. Leaders are continually raising the bar and drive their teams to deliver high-quality products, services, and processes. Leaders ensure that defects do not get sent down the line and that problems are fixed so they stay fixed.
  8. Think Big. Thinking small is a self-fulfilling prophecy. Leaders create and communicate a bold direction that inspires results. They think differently and look around corners for ways to serve customers.
  9. Bias for Action. Speed matters in business. Many decisions and actions are reversible and do not need extensive study. We value calculated risk-taking.
  10. Frugality. Accomplish more with less. Constraints breed resourcefulness, self-sufficiency, and invention. There are no extra points for growing headcount, budget size, or fixed expense.
  11. Earn Trust. Leaders listen attentively, speak candidly, and treat others respectfully. They are vocally self-critical, even when doing so is awkward or embarrassing. Leaders do not believe their or their team’s body odor smells of perfume. They benchmark themselves and their teams against the best.
  12. Dive Deep. Leaders operate at all levels, stay connected to the details, audit frequently, and are skeptical when metrics and anecdotes differ. No task is beneath them.
  13. Have Backbone; Disagree and Commit. Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.
  14. Deliver Results. Leaders focus on the key inputs for their business and deliver them with the right quality and in a timely fashion. Despite setbacks, they rise to the occasion and never settle.


Mechanisms: Reinforcing the Leadership Principles

“Good intentions don’t work. Mechanisms do.” No company can rely on good intentions like “We must try harder!” or “Next time remember to…” to improve a process, solve a problem, or fix a mistake. That’s because people already had good intentions when the problems cropped up in the first place. Amazon realized early on that if you don’t change the underlying condition that created a problem, you should expect the problem to recur.

Three foundational mechanisms are:

  1. the annual planning process
  2. the S-Team goals process (the S-Team consists of the senior vice presidents and direct reports to Jeff Bezos)
  3. Amazon’s compensation plan, which aligns incentives with what’s best for customers and the company over the long term.

Annual Planning: OP1 and OP2

Amazon relies heavily on autonomous, single-threaded teams (more in chapter three). These teams keep the company nimble, moving quickly with a minimum of external friction, but their autonomy must be paired with precise goal-setting to align each team’s independent plans with the company’s overarching goals.

Amazon’s planning for the calendar year begins in the summer. It’s a painstaking process that requires four to eight weeks of intensive work for the managers and many staff members of every team in the company. This intensity is deliberate, because a poorly defined plan—or worse, no plan at all—can incur a much greater downstream cost.

The S-Team begins by creating a set of high-level expectations or objectives for the entire company. For example, in previous years, the CEO and CFO would articulate goals like “Grow revenue from $10 billion to $15 billion” or “Reduce fixed costs by 5 percent.” Over time, Amazon refined such broad goals into a longer list of increasingly detailed objectives. Examples have included: revenue growth targets by geography and business segment; operating leverage targets; improving productivity and giving back those savings to customers in the form of lower prices; generating strong free cash flow; and understanding the level of investment in new businesses, products, and services.

Once these high-level expectations are established, each group begins work on its own more granular operating plan—known as OP1—which sets out the individual group’s “bottom-up” proposal. Through the narrative process (described in chapter four), Amazon aims to evaluate about ten times as much information as the typical company does in a similar time frame. The main components of an OP1 narrative are:

  1. Assessment of past performance, including goals achieved, goals missed, and lessons learned
  2. Key initiatives for the following year
  3. A detailed income statement
  4. Requests (and justifications) for resources, which may include things like new hires, marketing spend, equipment, and other fixed assets

Each group works in partnership with its finance and human resources counterparts to create their detailed plan, which is then presented to a panel of leaders. The level of those leaders—director, VP, or S-Team—depends on the size, impact, or strategic importance of the group. The panel then reconciles any gaps between the bottom-up proposal and the top-down goals the group has been asked to meet. Sometimes a team may be asked to rework its plan and re-present it until there’s agreement between the top-down goals and bottom-up plan.

The OP1 process runs through the fall and is completed before the fourth-quarter holiday rush begins. In January, after the holiday season ends, OP1 is adjusted as necessary to reflect the fourth-quarter results, as well as to update the trajectory of the business. This shorter process is called OP2, and it generates the plan of record for the calendar year.

OP2 aligns each group with the goals of the company. Everybody knows their overall objectives, including targets for revenue, cost, and performance. The metrics are agreed upon and will be supplied as part of every team’s deliverables. OP2 makes it crystal clear what each group has committed to do, how they intend to achieve those goals, and what resources they need to get the work done. Some variances are inevitable, but any change to OP2 requires formal S-Team approval.
S-Team Goals

During OP1, as the S-Team reads and reviews the various operating plans, they select the initiatives and goals from each team that they consider to be the most important to achieve. These selected goals are called, unsurprisingly, S-Team goals. In other words, my (Bill’s) team working on Amazon Music might have had 23 goals and initiatives in our 2012 operating plan. After reviewing our plan with us, the S-Team might have chosen six of the 23 to become S-Team goals. The music team would still have worked to achieve all 23 goals, but it would be sure to make resource allocation decisions throughout the year to prioritize the six S-Team goals ahead of the remaining 17.

Three notably Amazonian features of S-Team goals are their unusually large number, their level of detail, and their aggressiveness. S-Team goals once numbered in the dozens, but these have expanded to many hundreds every year, scattered across the entire company. S-Team goals are mainly input-focused metrics that measure the specific activities teams need to perform during the year that, if achieved, will yield the desired business results. In chapter six, we will discuss in more detail how Amazon develops such precise and specific metrics to ensure teams meet their business objectives.

S-Team goals must be Specific, Measurable, Attainable, Relevant, and Timely (SMART). An actual S-Team goal could be as specific as “Add 500 new products in the amazon.fr Musical Instruments category (100 products in Q1, 200 in Q2…),” or “Ensure 99.99 percent of all calls to software service ‘Y’ are successfully responded to within 10 milliseconds,” or “Increase repeat advertisers from 50 percent to 75 percent by Q3 of next year.” S-Team goals are aggressive enough that Amazon only expects about three-quarters of them to be fully achieved during the year. Hitting every one of them would be a clear sign that the bar had been set too low.

S-Team goals for the entire company are aggregated and their metrics are tracked with centralized tools by the finance team. Each undergoes an intensive quarterly review that calls for thorough preparation. Reviews are conducted in multihour S-Team meetings scheduled on a rolling basis over the quarter rather than all at once. At many companies, when the senior leadership meets, they tend to focus more on big-picture, high-level strategy issues than on execution. At Amazon, it’s the opposite. Amazon leaders toil over the execution details and regularly embody the Dive Deep leadership principle, which states: “Leaders operate at all levels, stay connected to the details, audit frequently, and are skeptical when metrics and anecdotes differ. No task is beneath them.”

The finance team tracks the S-Team goals throughout the year with a status of green, yellow, and red. Green means you are on track, yellow means there is some risk of missing the goal, and red means you are not likely to hit the goal unless something meaningful changes. During the periodic reviews, yellow or red status draws the team’s attention where it’s needed most, and a frank discussion about what’s wrong and how it will be addressed ensues.

The OP planning process aligns the entire company on what’s truly important to accomplish for the year. S-Team goals refine that alignment by giving top priority to the company’s biggest or most pressing objectives. The review cadence helps maintain alignment, no matter what happens along the way.
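
As a rough illustration of how such goals might be tracked, here is a minimal sketch of a SMART-style goal with a green/yellow/red status. The pace thresholds and example figures are assumptions made for the example; the book does not specify how the finance team’s tooling computes status.

```python
# Hypothetical sketch of S-Team-style goal tracking with green/yellow/red status.
# Thresholds and example figures are assumptions, not Amazon's actual system.
from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    target: float            # year-end target, e.g., 500 new products
    actual_to_date: float    # progress so far
    expected_to_date: float  # where an on-pace goal would be right now

    def status(self) -> str:
        pace = self.actual_to_date / self.expected_to_date
        if pace >= 1.0:
            return "green"   # on track
        if pace >= 0.8:      # assumed threshold for "some risk"
            return "yellow"
        return "red"         # unlikely to hit the goal without meaningful change

goals = [
    Goal("Add 500 new products in amazon.fr Musical Instruments", 500, 210, 200),
    Goal("Respond to 99.99% of service 'Y' calls within 10 ms", 99.99, 99.90, 99.99),
    Goal("Increase repeat advertisers from 50% to 75%", 75, 55, 62),
]
for g in goals:
    print(f"[{g.status():6}] {g.description}")
```
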
This structure ensures that every goal that’s important to the company has someone—an accountable owner—working on it. Last, as Amazon has grown, the planning process has evolved with it. While the overall structure remains the same, there are now separate leadership teams for the retail business and AWS—and even separate teams for the large businesses within those parts of the company. Each of these parts of the company has its own version of “S-Team goals,” just with a different label. As your organization grows, you can follow this recursive process too.

Amazon Compensation Reinforces Long-Term Thinking

Even the very best of all these preparations can still be subverted by other factors—the most insidious of which is a certain type of “performance-based” executive compensation that’s all too common elsewhere. No matter how clear your leadership principles and yearly plan may be, they speak softly in comparison to financial incentives. Money talks—if your leadership principles, your yearly plan, and your financial incentives are not closely aligned, you won’t get the right results.

Amazon believes that the “performance” in performance-based compensation must refer to the company’s overall performance, that is, the best interests of shareholders, which in turn are perfectly aligned with the best interests of customers. Accordingly, the compensation of Amazon S-Team members and all senior leaders is heavily weighted toward equity earned over a period of several years. The maximum salary itself is set well below that of industry peers in the United States. When we were there, the maximum base salary for any employee was $160,000 (indications are that this remains true). Some new executive hires may receive a signing bonus, but the bulk of their compensation—and the potentially enormous upside—is the long-term value of the company.

The wrong kind of compensation practice can cause misalignment in two ways: (1) by rewarding short-term goals at the expense of long-term value creation, and (2) by rewarding the achievement of localized departmental milestones whether or not they benefit the company as a whole. Both can powerfully drive behaviors that are antithetical to the company’s ultimate goals.

In other industries, such as media and financial services, a large percentage of executive compensation is doled out in annual performance bonuses. These short-term goals (and yes, a year is definitely short term) can generate behaviors that are detrimental to creating long-term value.


Amazon uses a similar long-term equity structure in order to prevent potential conflicts of interest in its wholly owned subsidiaries, including IMDb, Zappos, and Twitch. Executives in those companies are compensated in the same manner as other Amazon executives, primarily with a base salary and a heavy emphasis on Amazon equity, which encourages collaboration.


Jeff often said in those days, “We want missionaries, not mercenaries.”

Missionaries, as Jeff defined the term, would not only believe in Amazon’s mission but also embody its Leadership Principles. They would also stick around: we wanted people who would thrive and work at Amazon for five-plus years, not the 18–24 months typical of Silicon Valley.


Hiring

Even the smartest interviewer can wander off script and ask questions that lack a clear objective, leading to answers that reveal nothing about a candidate’s likely job performance.

Unstructured hiring decision meetings can give rise to groupthink, confirmation bias, and other cognitive traps that feel right at the time but produce poor decisions.

Several significant flaws appear in this hiring process. First, the fact that the team members shared their thoughts after each interview increased the likelihood that subsequent interviewers would be biased. And Carson’s failure to immediately write up his assessment meant that the group was deprived of the wisdom of its most experienced and insightful team member. Carson’s behavior—uncharacteristic for him—was just one result of the urgency bias that affected the whole process. With a key position glaringly open, and a critical employee doing double duty to cover, the whole team felt time pressure that compelled them to accentuate the positive and to overlook some shortcomings in the process.

One of these was the quality of the written evaluations. Brandon’s evaluation, for example, showed that his interview questions had lacked specificity and purpose. He commented that Joe “has a solid background owning and driving strategy” but did not provide any detailed, credible examples of what Joe actually had accomplished in that regard. How could the group tell whether his past experience indicated that he would be a high-performing Green Corp. employee?

The group had also succumbed to some serious confirmation bias—the tendency for people to focus on the positive elements that others identify and ignore the negatives and contradictory signals. At every handoff during the loop, the interviewers had engaged in conversation in the team room. The positive comments from the interviewer who had just completed the meeting with Joe influenced the next interviewer to also look for those positive characteristics and to emphasize them in their evaluation. The feedback meeting itself had been relatively unstructured, which had given rise to groupthink among a team that valued each other’s approval and wanted to help solve the problem by making a hire.

Every bad hiring decision comes at a cost. In the best cases, it quickly becomes apparent that the new hire is not a good fit, and the person leaves shortly after joining. Even then, the short-term cost can be substantial: the position may go unstaffed for longer than you’d like, the interview team will have wasted their time, and good candidates may have been turned away in the interim. In the worst case, a bad hire stays with the company while making errors in judgment that bring a host of possible bad outcomes.


The Effects of Personal Bias and Hiring Urgency

There are other types of cognitive biases that affect the hiring process. Another harmful one is personal bias, the basic human instinct to surround yourself with people who are like you.


The problems with this approach are obvious: (1) such superficial similarities typically have nothing to do with performance, and (2) hiring for them tends to make for un-diverse workforces with a narrower field of vision.

According to Sequoia Capital, the average startup in Silicon Valley spends 990 hours to hire 12 software engineers!1 That’s more than 80 hours per hire, and all that time taken away from a team that’s already understaffed and working on deadline only adds to the urgency to staff up.


Another force that works against successful hiring is the lack of a formal process and training.


Brent Gleeson, a leadership coach and Navy SEAL combat veteran, writes, “Organizational culture comes about in one of two ways. It’s either decisively defined, nurtured and protected from the inception of the organization; or—more typically—it comes about haphazardly as a collective sum of the beliefs, experiences and behaviors of those on the team. Either way, you will have a culture. For better or worse.”

In a period of torrid headcount growth, founders and early employees often feel that they’re losing control of the company—it has become something different than what they set out to create. Looking back, they realize that the root cause of the problem can be traced to an ill-defined or absent hiring process. They were hiring scores of people who would change the company culture rather than those who would embody, reinforce, and add to it.

The Bar Raiser Solution

The Amazon Bar Raiser program has the goal of creating a scalable, repeatable, formal process for consistently making appropriate and successful hiring decisions. Like all good processes, it’s simple to understand, can be easily taught to new people, does not depend on scarce resources (such as a single individual), and has a feedback loop to ensure continual improvement.

The name was intended to signal to everyone involved in the hiring process that every new hire should “raise the bar,” that is, be better in one important way (or more) than the other members of the team they join. The theory held that by raising the bar with each new hire, the team would get progressively stronger and produce increasingly powerful results.


Bar Raiser Hiring Process

There are eight steps in the Bar Raiser hiring process:

  1. Job Description
  2. Résumé Review
  3. Phone Screen
  4. In-House Interview
  5. Written Feedback
  6. Debrief/Hiring Meeting
  7. Reference Check
  8. Offer Through Onboarding

Job Description

At Amazon, it is the hiring manager’s responsibility to write the description, which the Bar Raiser can review for clarity.

Phone Screen

After the résumé review, the hiring manager (or their designate in the case of technical roles) conducts a one-hour phone interview with each candidate. During the phone screen, the hiring manager describes the position to the candidate in detail and seeks to establish some rapport with them by describing their own background and why they chose to join Amazon. Roughly 45 minutes of that hour should consist of the manager questioning the candidate and following up where necessary.

The questions are designed to solicit examples of the candidate’s past behavior (“Tell me about a time when you…”) and focus on a subset of the Amazon Leadership Principles.

After this detailed phone screen, the hiring manager decides whether they are inclined to hire the candidate based on the data they’ve collected so far. If so, then the candidate will be invited for an in-house interview.

In most cases, the questionable candidate will not get the job, and a lot of time will have been wasted in the process. The hiring manager should not bring a candidate in for the time-consuming and expensive interview loop unless they are inclined to hire them after the phone interview.

In-House Interview Loop

The in-house interview loop takes five to seven hours to complete and requires the participation of several people who undoubtedly have many other responsibilities and tasks on their plate, so this step must be carefully planned, prepared, and executed.

Typically, the most effective loops consist of five to seven interviewers. The company has found that the returns on having more people than that involved tend to diminish, and that when there are fewer people involved, there are often gaps in knowledge about the candidate.

There are important qualifications for the loop participants. First, everyone must have been properly trained in the company’s interviewing process.

After training, the interviewer is required to pair up with an experienced senior interviewer to jointly conduct at least one real interview before they do one on their own.

Second, no loop participant should be more than one level below the level of the position the candidate will hold. Nor should there be an interviewer who would become a direct report of the candidate.

It’s uncomfortable for the candidate during the interview, and the direct report will learn about the candidate’s weaknesses, and other employees’ views of those weaknesses, during the debrief—which could lead to problems for the future functioning of the team. Also, nothing good happens if a future direct report is not inclined to hire the candidate and you hire that person anyway.

There are two distinctive features in an Amazon in-house interview loop: behavioral interviewing and the Bar Raiser.

1. Behavioral Interviewing

Eventually the most important goal of the interview process became clear: to assess how well a candidate’s past behavior and ways of working map to the Amazon Leadership Principles.

This involves assigning one or more of the 14 Leadership Principles to each member of the interview panel, who in turn poses questions that map to their assigned leadership principle, seeking to elicit two kinds of data. First, the interviewer wants the candidate to provide detailed examples of what they personally contributed to solving hard problems or how they performed in work situations like the ones they will experience at Amazon. Second, the interviewer wants to learn how the candidate accomplished their goals and whether their methods align with the Amazon Leadership Principles.


Organizing Separable, Single-Threaded Leadership

Why coordination increases and productivity decreases

We are often asked how Amazon has managed to buck that trend by innovating so rapidly, especially across so many businesses—online retail, cloud computing, digital goods, devices, cashierless stores, and more.

The answer lies in an Amazon innovation called “single-threaded leadership,” in which a single person, unencumbered by competing responsibilities, owns a single major initiative and heads up a separable, largely autonomous team to deliver its goals.


Our explosive growth was slowing down our pace of innovation. We were spending more time coordinating and less time building.

Each overlap created a dependency: something one team needs but can’t supply for itself.

Managing dependencies requires coordination—two or more people sitting down to hash out a solution—and coordination takes time. As Amazon grew, we realized that despite our best efforts, we were spending too much time coordinating and not enough time building. That’s because, while the growth in employees was linear, the number of their possible lines of communication grew much faster, roughly with the square of headcount.
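
The arithmetic behind that observation: among n people there are n(n-1)/2 possible pairwise lines of communication, so the coordination surface grows roughly with the square of headcount even though headcount itself grows linearly. A quick illustration:

```python
# Possible pairwise lines of communication among n people: n * (n - 1) / 2.
def comm_lines(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} people -> {comm_lines(n):>7} possible lines of communication")
# 10 -> 45, 100 -> 4950, 1000 -> 499500: headcount up 100x, coordination paths up ~11,000x
```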

The variations in technical dependencies are endless, but each one binds teams more tightly together, turning a rapid sprint into a stumbling sack race where only the most coordinated will cross the finish line. When a software architecture includes a large number of technical dependencies, it is said to be tightly coupled, a bad thing that frustrates all involved when you are trying to double and triple the size of the software team.

Organizational Dependencies Our organizational chart created extra work in a similar fashion, forcing teams to slog through layers of people to secure project approval, prioritization, and allocation of shared resources that were required to deliver a project.

When the company was smaller, you could enlist help or check for possible conflicts by just asking around—everyone often knew each other fairly well. At scale, the same task became long and laborious. You’d have to figure out who you needed to talk to,

Better Coordination Was the Wrong Answer Resolving a dependency usually requires coordination and communication. And when your dependencies keep growing, requiring more and more coordination, it’s only natural to try speeding things up by improving your communication. There are countless approaches to managing cross-team coordination, ranging from formalized practices to hiring dedicated coordinators—and it seemed as though we looked at them all. At last we realized that all this cross-team communication didn’t really need refinement at all—it needed elimination.


This gave rise to a process called New Project Initiatives (NPI), whose job was global prioritization...

The NPI process had to decide which projects were worthy of doing immediately and which ones could wait. Such global prioritization proved to be very hard indeed. Which is more important, launching a cost-saving project for fulfillment centers, adding a feature that might boost sales in the apparel category, or cleaning up old code that we cannot do without, to extend its practical life?

Here’s how NPI worked: Once every quarter, teams submitted projects they thought were worth doing that would require resources from outside their own team—which basically meant almost every project of reasonable size. It took quite a bit of work to prepare and submit an NPI request. You needed a “one-pager”: a written summary of the idea; an initial rough estimate of which teams would be impacted; a consumer adoption model, if applicable; a P&L; and an explanation of why it was strategically important for Amazon to embark on the initiative immediately. Just proposing the idea represented a resource-intensive undertaking.

A small group would screen all the NPI submissions. A project could be cut in the first round if it wasn’t thoroughly explained, didn’t address a core company goal, didn’t represent an acceptable cost/benefit ratio, or obviously wouldn’t make the cut. The more promising ideas would move to the next round for a more detailed technical and financial scoping exercise. This step typically happened in real time in a conference room where a leader from each major area could review the project submission, ask any clarifying questions, and provide an estimate on how many resources from their area would be required to complete the project as stated. Usually 30 or 40 attendees were on hand to review a full list of projects, which made for long, long meetings—yuck.

Afterward, the smaller NPI core group would true up the resource and payback estimates, then decide which projects would actually go forward. After that group met, every project team leader would receive an email about their submission that came in one of three forms. From best to worst they were:

In an effort to improve our assumptions, we established a feedback loop to measure how well a team’s estimates matched its eventual results, adding another layer of accountability. Jeff Wilke stashed away paper copies of approved NPI proposals so he could check the predictions against actual results later.

A year or more could pass between the first presentation and measurable results, which is a long time to wait in order to learn what adjustments are needed.

Amazon ultimately invented its way around the problem by cutting off dependencies at the source.


First Proposed Solution: Two-Pizza Team

Seeing that our best short-term solutions would not be enough, Jeff proposed that instead of finding new and better ways to manage our dependencies, we figure out how to remove them. We could do this, he said, by reorganizing software engineers into smaller teams that would be essentially autonomous, connected to other teams only loosely, and only when unavoidable.

A two-pizza team will:

  1. Be small. No more than ten people.
  2. Be autonomous. They should have no need to coordinate with other teams to get their work done. With the new service-based software architecture in place, any team could simply refer to the published application programming interfaces (APIs) for other teams. (More on this new software architecture to follow.)
  3. Be evaluated by a well-defined “fitness function.” This is the sum of a weighted series of metrics. Example: a team that is in charge of adding selection in a product category might be evaluated on: a) how many new distinct items were added for the period (50 percent weighting); b) how many units of those new distinct items were sold (30 percent weighting); c) how many page views those distinct items received (20 percent weighting). (A minimal sketch of such a weighted sum follows this list.)
  4. Be monitored in real time. A team’s real-time score on its fitness function would be displayed on a dashboard next to all the other two-pizza teams’ scores.
  5. Be the business owner. The team will own and be responsible for all aspects of its area of focus, including design, technology, and business results. This paradigm shift eliminates the all-too-often heard excuses such as, “We built what the business folks asked us to, they just asked for the wrong product,” or “If the tech team had actually delivered what we asked for and did it on time, we would have hit our numbers.”
  6. Be led by a multidisciplined top-flight leader. The leader must have deep technical expertise, know how to hire world-class software engineers and product managers, and possess excellent business judgment.
  7. Be self-funding. The team’s work will pay for itself.
  8. Be approved in advance by the S-Team. The S-Team must approve the formation of every two-pizza team.
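
The fitness function item above is just a weighted sum. Here is a minimal sketch using the example weights from the list; the normalization targets are assumptions added for illustration, since the excerpt gives only the weights.

```python
# Minimal sketch of the example fitness function: 50% new distinct items added,
# 30% units of those items sold, 20% page views. Targets are assumed for illustration.

weights = {"new_items_added": 0.50, "units_sold": 0.30, "page_views": 0.20}
targets = {"new_items_added": 1000, "units_sold": 5000, "page_views": 200_000}  # assumed
actuals = {"new_items_added": 900,  "units_sold": 5600, "page_views": 150_000}

def fitness(actuals: dict, targets: dict, weights: dict) -> float:
    # Score each metric as a fraction of its target, then combine by weight.
    return sum(w * (actuals[m] / targets[m]) for m, w in weights.items())

print(f"fitness = {fitness(actuals, targets, weights):.3f}")  # 1.000 means exactly on target
```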


If multiple teams have direct access to a shared block of software code or some part of a database, they slow each other down. Whether they’re allowed to change the way the code works, change how the data are organized, or merely build something that uses the shared code or data, everybody is at risk if anybody makes a change. Managing that risk requires a lot of time spent in coordination. The solution is to encapsulate, that is, assign ownership of a given block of code or part of a database to one team. Anyone else who wants something from that walled-off area must make a well-documented service request via an API.4 Think of it like a restaurant. If you are hungry, you don’t walk into the kitchen and fix what you want. You ask for a menu, then choose an item from it. If you want something that is not on that menu, you can ask the waiter, who will send a request to the cook. But there is no guarantee you’ll get it. What happens inside the walled-off area in question is completely up to the single team that owns it, so long as they don’t change how information can be exchanged. If change becomes necessary, the owners publish a revised set of rules—a new menu, if you will—and all those who rely on them are notified.
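
The restaurant analogy maps directly onto a service boundary: consuming teams call a published interface rather than reaching into the owning team’s data. A small illustrative sketch follows; the service and method names are invented for the example, not Amazon’s actual services.

```python
# Illustrative sketch of encapsulation behind a published API.
# Service and method names are hypothetical.

class InventoryService:
    """Owned by a single team. Internal storage is private; only the 'menu' below is public."""

    def __init__(self) -> None:
        self._stock = {"B000ABC123": 42}   # internal detail; the owning team may change it freely

    # Published API: the only supported way for other teams to read inventory data.
    def get_available_quantity(self, item_id: str) -> int:
        return self._stock.get(item_id, 0)

# A consuming team orders from the "menu" ...
inventory = InventoryService()
print(inventory.get_available_quantity("B000ABC123"))   # 42

# ... and never touches inventory._stock directly, so the owning team can reorganize
# its internals at will as long as the published interface keeps its contract.
```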

Today the advantages of a microservices-based architecture are well understood, and the approach has been adopted by many tech companies. The benefits include improved agility, developer productivity, scalability, and a better ability to resolve and recover from outages and failures. In addition, with microservices, it becomes possible to establish small, autonomous teams that can assume a level of ownership of their code that isn’t possible with a monolithic approach.

The team had a well-defined purpose. For example, the team intends to answer the question, “How much inventory should Amazon buy of a given product and when should we buy it?” The boundaries of ownership were well understood. For example, the team asks the Forecasting team what the demand will be for a particular product at a given time, and then uses their answer as an input to make a buying decision. The metrics used to measure progress were agreed upon. For example, In-stock Product Pages Displayed divided by Total Product Pages Displayed, weighted at 60 percent; and Inventory Holding Cost, weighted at 40 percent.

Each team started out with its own share of dependencies that would hold them back until eliminated, and eliminating the dependencies was hard work with little to no immediate payback. The most successful teams invested much of their early time in removing dependencies and building “instrumentation”—our term for infrastructure used to measure every important action—before they began to innovate, that is, add new features.

“most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.”

Dependencies could arise in the form of cross-functional projects or top-down initiatives that spanned multiple teams. For example, a two-pizza team working on picking algorithms for the fulfillment centers might also be called upon to add support for robotics being implemented to move products around the warehouse. We found it helpful to think of such cross-functional projects as a kind of tax, a payment one team had to make in support of the overall forward progress of the company. We tried to minimize such intrusions but could not avoid them altogether.


Two-Pizza Teams Worked Best in Product Development

We weren’t sure how far to take the two-pizza team concept, and at the beginning it was planned solely as a reorganization of product development. Seeing its early success in speeding up innovation, we wondered whether it might also work in retail, legal, HR, and other areas. The answer turned out to be no, because those areas did not suffer from the tangled dependencies that had hampered Amazon product development.


Fitness Functions Were Actually Worse Than Their Component Metrics

First, teams spent an inordinate amount of time struggling with how to construct the most meaningful fitness function. Should the formula be 50 percent for Metric A plus 30 percent for Metric B plus 20 percent for Metric C? Or should it be 45 percent for Metric A plus 40 percent for Metric B plus 15 percent for Metric C? You can imagine how easy it was to get lost in those debates.

Second, some of these overly complicated functions combined seven or more metrics, a few of which were composite numbers built from their own submetrics. When graphed over time, they might describe a trend line that went up and to the right, but what did that mean? It was often impossible to discern what the team was doing right (or wrong) and how they should respond to the trend. Also, the relative weightings could change over time as business conditions changed, obscuring historic trends altogether.
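
A small, invented example of the problem: with plausible numbers, the weighted total can trend up and to the right even while one component metric deteriorates, and changing the weights mid-year would break comparison with earlier periods.

```python
# Invented example: a composite fitness score can rise while a component metric falls.
weights = {"A": 0.50, "B": 0.30, "C": 0.20}

# Normalized metric scores (1.0 = on target) for two consecutive quarters.
q1 = {"A": 1.00, "B": 1.00, "C": 1.00}
q2 = {"A": 1.20, "B": 1.10, "C": 0.70}   # metric C has dropped 30 percent

def composite(scores: dict) -> float:
    return sum(weights[m] * scores[m] for m in weights)

print(composite(q1), composite(q2))   # 1.0 -> 1.07: the trend line goes up,
                                      # while the decline in C is hidden inside the total
```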

We eventually reverted to relying directly on the underlying metrics instead of the fitness function.

Great Two-Pizza Team Leaders Proved to Be Rarities

The original idea was to create a large number of small teams, each under a solid, multidisciplined, frontline manager and arranged collectively into a traditional, hierarchical org chart. The manager would be comfortable mentoring and diving deep in areas ranging from technical challenges to financial modeling and business performance. Although we did identify a few such brilliant managers, they turned out to be notoriously difficult to find in sufficient numbers, even at Amazon. This greatly limited the number of two-pizza teams we could effectively deploy, unless we relaxed the constraint of forcing teams to have direct-line reporting to such rare leaders.

Typically an executive, assigned to drive some innovation or initiative, would turn to one of his reports—possibly a director or senior manager—who might have responsibility for five of the executive’s 26 total initiatives. The executive would ask the director to identify one of those direct reports—let’s say a project manager—who would add the project to their to-do list. The PM, in turn, would prevail upon an engineering director to see if one of their dev teams could squeeze the work into their dev schedule. Amazon’s SVP of Devices, Dave Limp, summed up nicely what might happen next: “The best way to fail at inventing something is by making it somebody’s part-time job.”


FBA launched in September 2006 and became a huge success. Third-party sellers loved it because, by offering them warehouse space for their products, Amazon turned warehousing into a variable cost for them instead of a fixed cost. FBA also enabled third-party sellers to reap the benefits of participating in Prime, which in turn improved the customer experience for buyers.


The other crucial component of the STL model is a separable, single-threaded team being run by a single-threaded leader like Tom. As Jeff Wilke explains, “Separable means almost as separable organizationally as APIs are for software. Single-threaded means they don’t work on anything else.”

Memos

This chapter describes how Amazon made the transition from the use of PowerPoint (or any other presentation software) to written narratives, and how it has benefited the company—and can benefit yours too. Amazon uses two main forms of narrative. The first is known as the “six-pager.” It is used to describe, review, or propose just about any type of idea, process, or business. The second narrative form is the PR/FAQ. This one is specifically linked to the Working Backwards process for new product development.


The inspiration was an essay, “The Cognitive Style of PowerPoint: Pitching Out Corrupts Within,” by Edward Tufte, a Yale professor who is an authority on the visualization of information.


“As analysis becomes more causal, multivariate, comparative, evidence based, and resolution-intense,” he writes, “the more damaging the bullet list becomes.” That description fit our discussions at the S-Team meetings: complex, interconnected, requiring plenty of information to explore, with greater and greater consequences connected to decisions. Such analysis is not well served by a linear progression of slides that makes it difficult to refer one idea to another, sparsely worded bits of text that don’t fully express an idea, and visual effects that are more distracting than enlightening. Rather than making things clear and simple, PowerPoint can strip the discussion of important nuance. In our meetings, even when a presenter included supporting information in the notes or accompanying audio, the PowerPoint presentation was never enough. Besides, the Amazon audience of tightly scheduled, experienced executives was eager to get to the heart of the matter as quickly as possible. They would pepper the presenter with questions and push to get to the punch line, regardless of the flow of slides. Sometimes the questions did not serve to clarify a point or move the presentation along but would instead lead the entire group away from the main argument. Or some questions might be premature and would be answered in a later slide, thus forcing the presenter to go over the same ground twice.








“For serious presentations,” he wrote, “it will be useful to replace PowerPoint slides with paper handouts showing words, numbers, data graphics, images together. High-resolution handouts allow viewers to contextualize, compare, narrate, and recast evidence. In contrast, data-thin, forgetful displays tend to make audiences ignorant and passive, and also to diminish the credibility of the presenter.”


He went on to recommend “a straightforward executive order: From now on your presentation software is Microsoft Word, not PowerPoint. Get used to it.”


The reason writing a good 4 page memo is harder than “writing” a 20 page powerpoint is because the narrative structure of a good memo forces better thought and better understanding of what’s more important than what, and how things are related. Powerpoint-style presentations somehow give permission to gloss over ideas, flatten out any sense of relative importance, and ignore the interconnectedness of ideas.


We’ve written one in a style we might submit today, if we were recommending for the first time that we use narratives instead of PowerPoint at S-Team meetings—a six-pager about six-pagers. Some of this is a pared-down version of what you’ve just read, which may help you see how we squeeze big ideas into the format of a true six-pager.


Example Memo

Dear PowerPoint: It’s Not You, It’s Us

Our decision-making process simply has not kept up with the rapid growth in the size and complexity of our business. We therefore advocate that, effective immediately, we stop using PowerPoint at S-Team meetings and start using six-page narratives instead.

What’s Wrong with Using PowerPoint?

S-Team meetings typically begin with a PowerPoint (PP) presentation that describes some proposal or business analysis for consideration. The style of the deck varies from team to team, but all share the constraints imposed by the PowerPoint format. No matter how complex or nuanced the underlying concepts, they are presented as a series of small blocks of text, short bullet-pointed lists, or graphics.

Even the most ardent PP fans acknowledge that too much information actually spoils the deck. Amazon’s bestselling book on PowerPoint describes three categories of slides:

  1. 75 words or more: A dense discussion document or white paper that is not suitable for a presentation—it’s better distributed in advance and read before the meeting.
  2. 50 words or so: A crutch for the presenter who uses it as a teleprompter, often turning away from an audience while reading aloud.
  3. Even fewer words: A proper presentation slide, used to visually reinforce primarily spoken content. The presenter must invest time to develop and rehearse this type of content.

One widely accepted rule of thumb, the so-called 6x6 Rule, sets a maximum of six bullet points, each with no more than six words. Other guidelines suggest limiting text to no more than 40 words per slide, and presentations to no more than 20 slides. The specific numbers vary, but the theme—limiting information density—is a constant. Taken as a whole, these practices point to a consensus: there’s only so much information one can fit into a PP deck without confusing, or losing, one’s audience. The format forces presenters to condense their ideas so far that important information is omitted.

Pressed against this functional ceiling, yet needing to convey the depth and breadth of their team’s underlying work, a presenter—having spent considerable time pruning away content until it fits the PP format—fills it back in, verbally. As a result, the public speaking skills of the presenter, and the graphics arts expertise behind their slide deck, have an undue—and highly variable—effect on how well their ideas are understood. No matter how much work a team invests in developing a proposal or business analysis, its ultimate success can therefore hinge upon factors irrelevant to the issue at hand.

We’ve all seen presenters interrupted and questioned mid-presentation, then struggle to regain their balance by saying things like, “We’ll address that in a few slides.” The flow becomes turbulent, the audience frustrated, the presenter flustered. We all want to deep dive on important points but have to wait through the whole presentation before being satisfied that our questions won’t be answered somewhere later on. In virtually every PP presentation, we have to take handwritten notes throughout in order to record the verbal give-and-take that actually supplies the bulk of the information we need. The slide deck alone is usually insufficient to convey or serve as a record of the complete argument at hand.

Our Inspiration

Most of us are familiar with Edward Tufte, author of the seminal (and Amazon bestselling) book The Visual Display of Quantitative Information. In an essay titled “The Cognitive Style of PowerPoint: Pitching Out Corrupts Within,” Tufte encapsulates our difficulties precisely:

As analysis becomes more causal, multivariate, comparative, evidence based, and resolution-intense, the more damaging the bullet list becomes.

This certainly describes S-Team meetings: complex, interconnected, requiring plenty of information to explore, with greater and greater consequences connected to decisions. Such analysis is not well served by a linear progression of slides, a presentation style that makes it difficult to refer one idea to another, to fully express an idea in sparsely worded bits of text, and to enlighten instead of distract with visual effects. Rather than making things clear and simple, PowerPoint is stripping our discussions of important nuance.

Tufte’s essay proposes a solution. “For serious presentations,” he writes, “it will be useful to replace PowerPoint slides with paper handouts showing words, numbers, data graphics, images together. High-resolution handouts allow viewers to contextualize, compare, narrate, and recast evidence. In contrast, data-thin, forgetful displays tend to make audiences ignorant and passive, and also to diminish the credibility of the presenter.”

He goes on: “For serious presentations, replace PP with word-processing or page-layout software. Making this transition in large organizations requires a straightforward executive order: From now on your presentation software is Microsoft Word, not PowerPoint. Get used to it.” We’ve taken this recommendation to heart, and we now propose to follow his advice.

Our Proposal: Banish PP in Favor of Narratives

We propose that we stop using PowerPoint in S-Team meetings immediately and replace it with a single narrative document. These narratives may sometimes include graphs and bulleted lists, which are essential to brevity and clarity, but it must be emphasized: merely reproducing a PP deck in written form will NOT be acceptable. The goal is to introduce the kind of complete and self-contained presentation that only the narrative form makes possible. Embrace it.

Our Tenet: Ideas, Not Presenters, Matter Most

A switch to narratives places the team’s ideas and reasoning center stage, leveling the playing field by removing the natural variance in speaking skills and graphic design expertise that today plays too great a role in the success of presentations. The entire team can contribute to the crafting of a strong narrative, reviewing and revising it until it’s at its very best. It should go without saying—sound decisions draw from ideas, not individual performance skills.

The time now spent upon crafting gorgeous, graphically elegant slide presentations can be recaptured and used for more important things. We can give back the time and energy now wasted on rehearsing one’s time at the podium and relieve a major, unnecessary stressor for many team leaders. It won’t matter whether the presenter is a great salesperson, a complete introvert, a new hire out of college, or a VP with 20 years of experience; what matters will be found on the page.

Last, the narrative document is infinitely portable and scalable. It is easy to circulate. Anyone can read it at any time. You don’t need handwritten notes or a vocal track recorded during the big presentation to understand its contents. Anyone can edit or make comments on the document, and they are easily shared in the cloud. The document serves as its own record.

The Readers’ Advantage: Information Density and Interconnection of Ideas

One useful metric for comparison is what we call the Narrative Information Multiplier (tip of the hat to former Amazon VP Jim Freeman for coining this term). A typical Word document, with text in Arial 11-point font, contains 3,000–4,000 characters per page. For comparison, we analyzed the last 50 S-Team PowerPoint slide presentations and found that they contained an average of just 440 characters per page. This means a written narrative would contain seven to nine times the information density of our typical PowerPoint presentation. If you take into account some of the other PowerPoint limitations discussed above, this multiplier only increases.

Tufte estimates that people read three times faster than the typical presenter can talk, meaning that they can absorb that much more information in a given time while reading a narrative than while listening to a PP presentation. A narrative therefore delivers much more information in a much shorter time.
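
For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above (the numbers are the essay's, not new measurements):

```python
# Back-of-the-envelope check of the Narrative Information Multiplier,
# using only the figures quoted in the text above.

chars_per_narrative_page = (3000, 4000)   # Word document, Arial 11-point font
chars_per_slide = 440                     # average across the last 50 S-Team decks

density_multiplier = tuple(round(c / chars_per_slide, 1) for c in chars_per_narrative_page)
print(density_multiplier)                 # (6.8, 9.1): "seven to nine times"

# Tufte's estimate: people read roughly 3x faster than a presenter can talk,
# so the same meeting minute carries roughly 3x the words when reading.
reading_vs_listening = 3
```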

The Narrative Information Multiplier is itself multiplied when one considers how many such meetings S-Team members attend in a single day. A switch to this denser format will allow key decision-makers to consume much more information in a given period of time than with the PowerPoint approach.

Narratives also allow for nonlinear, interconnected arguments to unfold naturally—something that the rigid linearity of PP does not permit. Such interconnectedness defines many of our most important business opportunities. Moreover, better-informed people make higher-quality decisions, and can deliver better, more detailed feedback on the presenting teams’ tactical and strategic plans. If our executives are better informed, at a deeper level, on a wider array of important company initiatives, we will gain a substantial competitive advantage over executives elsewhere who rely on traditional low-bandwidth methods of communication (e.g., PP).

The Presenters’ Advantage: Forces Greater Clarity of Thought

We know that writing narratives will likely prove to be harder work than creating the PP presentations that they will replace; this is actually positive. The act of writing will force the writer to think and synthesize more deeply than they would in the act of crafting a PP deck; the idea on paper will be better thought out, especially after the author’s entire team has reviewed it and offered feedback. It’s a daunting task to get all the relevant facts and all one’s salient arguments into a coherent, understandable document—and it should be.

Our goal as presenters is not to merely introduce an idea but to demonstrate that it’s been carefully weighed and thoroughly analyzed. Unlike a PP deck, a solid narrative can—and must—demonstrate how its many, often disparate, facts and analyses are interconnected. While an ideal PP presentation could do this, experience has shown that in practice PP decks rarely do.

A complete narrative should also anticipate the likely objections, concerns, and alternate points of view that we expect our team to deliver. Writers will be forced to anticipate smart questions, reasonable objections, even common misunderstandings—and to address them proactively in their narrative document. You simply cannot gloss over an important topic in a narrative presentation, especially when you know it’s going to be dissected by an audience full of critical thinkers. While this may seem a bit intimidating at first, it merely reflects our long-standing commitment to thinking deeply and correctly about our opportunities.

The old essay-writing adage “State, support, conclude” forms the basis for putting a convincing argument forward. Successful narratives will connect the dots for the reader and thus create a persuasive argument, rather than presenting a disconnected stream of bullet points and graphics that leave the audience to do all the work. Writing persuasively requires and enforces clarity of thought that’s even more vital when multiple teams collaborate on an idea. The narrative form demands that teams be in sync or, if they are not, that they clearly state in the document where they are not yet aligned.

Edward Tufte sums up the benefits of narratives over PP with his own blunt clarity: “PowerPoint becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of PowerPoint makes it easier for us to have foolish thoughts.”

How to Conduct a Meeting in This New Format

Narratives would be distributed at the start of each meeting and read by all in attendance during the time normally taken up by the slide deck—approximately the first 20 minutes. Many will want to take notes, or annotate their copy, during this time. Once everybody signals their readiness, conversation about the document begins.

We know that people read complex information at the rough average of three minutes per page, which in turn defines the functional length of a written narrative as about six pages for a 60-minute meeting. Our recommendation is therefore that teams respect the six-page maximum. There will no doubt be times when it feels difficult to condense a complete presentation into this size, but the same limitation—which is really one of meeting lengths—faces PP presenters as well. We believe that six pages should be enough, but we will review over time and revise if necessary.

Conclusion

PowerPoint could only carry us so far, and we’re thankful for its service, but the time has come to move on. Written narratives will convey our ideas in a deeper, stronger, more capable fashion while adding a key additional benefit: they will act as a forcing function that shapes sharper, more complete analysis. Six-page narratives are also incredibly inclusive communication, precisely because the interaction between the presenter and audience is zero during reading. No biases matter other than the clarity of reasoning. This change will strengthen not just the pitch, but the product—and the company—as well.

FAQ

Q: Most other companies of our size use PowerPoint. Why do we need to be different, and what if this switch turns out to be the wrong move? A: In simplest terms, we see a better way. Amazon differs from other major companies in ways that help us stand out, including our willingness to go where the data lead and seek better ways of doing familiar things. If this move doesn’t work out, we’ll do what we always do—iterate and refine, or roll it back entirely if that’s what the results show us is best.

Q: Why not distribute the narrative ahead of the meeting so we’re ready? A: The short time between distribution and the meeting might not give all attendees sufficient time for that task. Also, since the document replaces the deck, no time is lost by dedicating this phase of the meeting to a silent reading that brings everybody up to speed before Q&A begins. Last but certainly not least, this gives each presenting team the most possible time to complete and refine their presentation.

Q: My team has proven to be very good at PP presentations—do we HAVE to switch? A: YES. One danger of an unusually strong PP presentation is that the stage presence or charm of the presenter can sometimes unintentionally blind the audience to key questions or concerns. Slick graphics can distract equally well. Most importantly, we’ve shown that even the best use of PP simply cannot deliver the completeness and sophistication that narratives can.

Q: What if we put our PP deck into printed form and add some extended comments to strengthen and extend the information content? A: NO. Reproducing PP on paper also reproduces its weaknesses. There’s nothing one can do in PP that cannot be done more thoroughly, though sometimes less attractively, in a narrative.

Q: Can we still use graphs or charts in our narratives? A: YES. Most complex issues derive key insights from data and we expect that some of that data may be best represented in the form of a chart or graph. However, we do not expect that graphics alone can make the compelling and complete case we expect from a true written narrative. Include them if you must, but don’t let graphics predominate.

Q: Six pages feels short. How much can we fit onto a page? A: The six-page limit acts as a valuable forcing function that ensures we only discuss the most important issues. We also set aside 20 minutes for reading and expect that every attendee can read the entire thing during that time. Please don’t fall prey to the temptation to fiddle with margins or font size to squeeze more into the document. Adding density to stay under the six-page limit works against this goal and tempts writers to stray into less important areas of consideration.

Q: How will we measure the success of this change? A: Great question. We have not been able to identify a quantitative way to measure the quality of a series of S-Team decisions today, nor are we proposing a metric at this time. Comparing the two approaches will be a qualitative exercise. We propose implementing narratives for the next three months and then polling the S-Team to ask if they’re making better-informed decisions.


Six-Pagers Vary in Structure and Content

Two optional sections

  1. The first is to call out one or more key tenets that our proposal relies upon—a foundational element of the reasoning that led us to make this recommendation. Tenets give the reader an anchor point from which to evaluate the rest.
  2. The second, perhaps more commonly used, is the inclusion of an FAQ. Strong six-pagers don’t just make their case; they anticipate counterarguments, points of contention, or statements that might be easily misinterpreted. Adding an FAQ to address these saves time and gives the reader a useful focal point for checking the thoroughness of the authors’ thinking.

Six-page narratives can take many forms.

An Amazon quarterly business review, for instance, might be broken down like this instead:

  1. Introduction
  2. Tenets
  3. Accomplishments
  4. Misses
  5. Proposals for Next Period
  6. Headcount
  7. P&L
  8. FAQ
  9. Appendices (includes things like supporting data in the form of spreadsheets, tables and charts, mock-ups)

Some attendees will make comments in a shared online document, like Bill does, so that all meeting participants can see everyone’s comments. I (Colin) prefer the old-fashioned way, making comments on paper so I can lose myself in the document. This also helps me avoid the confirmation bias that might arise were I to read the real-time comments others were adding to the shared document.

When everyone has read the document, the presenter takes the floor. First-time presenters often start by saying, “Let me orally walk you through the document.” Resist that temptation; it will likely be a waste of time.

Some groups at Amazon go around the room, ask for high-level feedback, then pore over the document line by line. Other groups ask a single individual to give all their feedback on the entire document, then ask the next person in the audience to do the same. Just pick a method that works for you—there’s no single correct approach.

The key goal of the meeting, after all, is to seek the truth about the proposed idea or topic. We want that idea to become the best it can possibly be as a result of any adjustments

The presenter is generally too involved in answering questions to capture effective notes at the same time. If I don’t see anyone taking notes at the discussion stage, I will politely pause the meeting and ask who is going to do so.

Feedback as Collaboration

Providing valuable feedback and insight can prove to be as difficult as writing the narrative itself.

when the reader takes the narrative process just as seriously as the writer does, the comments can have real, significant, and long-lasting impact. You are not just commenting on a document, you’re helping to shape an idea, and thereby becoming a key team member for that business.

he assumes each sentence he reads is wrong until he can prove otherwise. He’s challenging the content of the sentence, not the motive of the writer. Jeff, by the way, was usually among the last to finish reading.

“Our customer-friendly returns policy allows returns up to 60 days from the time of purchase compared to the 30 days typically offered by our competitors.” A busy executive doing a cursory read and already thinking about their next meeting may be content with that statement and move on. However, a critical reader would challenge the implicit assumption being made, namely, that the longer allowable return duration makes the policy customer friendly. The policy may be better than a competitor’s, but is it actually customer friendly? Then during the discussion, the critical reader may ask, “If Amazon is really customer obsessed, why do we penalize the 99 percent of customers who are honest and want to return an item by making them wait until our returns department receives the item to make sure it’s the right item and that it’s not damaged?” This type of thinking—in which you assume there is something wrong with the sentence—led Amazon to create the no-hassle return policy, which specifies that the customer should get a refund even before Amazon receives the returned goods. (The refund is reversed for the small percentage of people who do not send back the item.)


Working Backwards

Most of Amazon’s major products and initiatives since 2004 have one very Amazonian thing in common—they were created through a process called Working Backwards.

Working Backwards is a systematic way to vet ideas and create new products. Its key tenet is to start by defining the customer experience, then iteratively work backwards from that point until the team achieves clarity of thought around what to build. Its principal tool is a second form of written narrative called the PR/FAQ, short for press release/frequently asked questions.

In the end, what turned out to work best was relying on the core Amazon principle of customer obsession and a simple yet flexible way of writing narrative documents. These two elements form the Working Backwards process—starting from the customer experience and working backwards from that by writing a press release that literally announces the product as if it were ready to launch and an FAQ anticipating the tough questions.

The first part of the process went as normal. Our team of three or four people developed plans using the tried-and-true MBA-style methods of the time. We gathered data about the size of the market opportunity. We constructed financial models projecting our annual sales in each category, assuming, of course, an ever-increasing share of digital sales. We calculated gross margin assuming a certain cost of goods from our suppliers. We projected an operating margin based on the size of the team we would need to support the business. We outlined the deals we would make with media companies. We sketched out pricing parameters. We described how the service would work for customers. We put it all together in crisp-looking PowerPoint slides (this was still several months before the switch to narratives) and comprehensive Excel spreadsheets.

Jeff wanted to know exactly what we were going to build and how it would be better for customers than the competition. He wanted us to agree on those details before we started hiring a team or establishing vendor relationships or building anything.

Jeff suggested a different approach for the next meeting. Forget the spreadsheets and slides, he said. Instead, each team member would write a narrative document. In it, they would describe their best idea for a device or service for the digital media business.

We distributed them and read them to ourselves and then discussed them, one after another. One proposed an e-book reader that would use new E Ink screen technology. Another described a new take on the MP3 player. Jeff wrote his own narrative about a device he called the Amazon Puck. It would sit on your countertop and could respond to voice commands like, “Puck. Please order a gallon of milk.” Puck would then place the order with Amazon.

The great revelation of this process was not any one of the product ideas. As we’ve described in chapter four, the breakthrough was the document itself. We had freed ourselves of the quantitative demands of Excel, the visual seduction of PowerPoint, and the distracting effect of personal performance. The idea had to be in the writing. Writing up our ideas was hard work. It required us to be thorough and precise. We had to describe features, pricing, how the service would work, why consumers would want it. Half-baked thinking was harder to disguise on the written page than in PowerPoint slides.

After we started using the documents, our meetings changed. There was more meat and more detail to discuss, so the sessions were livelier and longer. We weren’t so focused on the pro forma P&L and projected market segment share. We talked at length about the service itself, the experience, and which products and services we thought would appeal most to the customer.


What if we thought of the product concept narrative as a press release? Usually, in a conventional organization, a press release is written at the end of the product development process. The engineers and product managers finish their work, then “throw it over the wall” to the marketing and sales people, who look at the product from the customer point of view, often for the first time. They’re the ones who write the press release, which describes the killer features and fantastic benefits and is designed to create buzz, capture attention, and, above all, get customers to leap out of their chairs to buy.

In this standard process, the company works forward. The leaders come up with a product or business that is great for the company, and then they try to shoehorn it into meeting previously unmet customer needs.

The Kindle Press Release

Kindle was the first product offered by the digital media group, and it, along with several AWS products, was among the first at Amazon to be created using the press release approach.

When we wrote a Kindle press release and started working backwards, everything changed. We focused instead on what would be great for customers. An excellent screen for a great reading experience. An ordering process that would make buying and downloading books easy. A huge selection of titles. Low prices. We would never have had the breakthroughs necessary to achieve that customer experience were it not for the press release process, which forced the team to invent multiple solutions to customer problems.

The FAQ section, as it developed, included both external and internal questions. External FAQs are the ones you would expect to hear from the press or customers. “Where can I purchase a new Amazon Echo?” or “How does Alexa work?” Internal FAQs are the questions that your team and the executive leadership will ask. “How can we make a 44-inch TV with an HD display that can retail for $1,999 at a 25 percent gross margin?” or “How will we make a Kindle reader that connects to carrier networks to download books without customers having to sign a contract with a carrier?” or “How many new software engineers and data scientists do we need to hire for this new initiative?” In other words, the FAQ section is where the writer shares the details of the plan from a consumer point of view and addresses the various risks and challenges from internal operations, technical, product, marketing, legal, business development, and financial points of view.

The Features and Benefits of the PR/FAQ

The primary point of the process is to shift from an internal/company perspective to a customer perspective.

The PR/FAQ process creates a framework for rapidly iterating and incorporating feedback and reinforces a detailed, data-oriented, and fact-based method of decision-making.

Over time, we refined and normalized the specifications for the PR/FAQ. The press release (PR) portion is a few paragraphs, always less than one page. The frequently asked questions (FAQ) should be five pages or less. There are no awards for extra pages or more words. The goal isn’t to explain all the excellent work you have done but rather to share the distilled thinking that has come from that work.


The creation of the PR/FAQ starts with the person who originated either the idea or the project writing a draft. When it’s in shareable condition, that person sets up a one-hour meeting with stakeholders to review the document and get feedback. At the meeting, they distribute the PR/FAQ in either soft or hard copy, and everyone reads it to themselves. When they have finished, the writer asks for general feedback. The most senior attendees tend to speak last, to avoid influencing others.

Once everyone has given their high-level responses, the writer asks for specific comments, line by line, paragraph by paragraph. This discussion of the details is the critical part of the meeting. People ask hard questions. They engage in intense debate and discussion of the key ideas and the way they are expressed. They point out things that should be omitted or things that are missing.

After the meeting, the writer distributes meeting minutes to all the attendees, including notes on the feedback. Then they get to work on the revision, incorporating responses to the feedback. When it is polished, they present it to the executive leaders in the company. There will be more feedback and discussion. More revision and more meetings may be required.


PR Newswire, Atlanta, GA, November 5, 2019

Today Blue Corp. announced the launch of Melinda, a smart mailbox that ensures secure and properly chilled delivery and storage for your online purchases and groceries. With Melinda, you no longer need to worry about getting your deliveries stolen from your doorstep or spoiled groceries. Plus, you’re notified as soon as your packages are delivered. Packed with smart technology, Melinda costs just $299.

Today, 23 percent of online shoppers report having packages stolen from their front porch, and 19 percent complain of grocery deliveries being spoiled. With no easy solution to these problems, customers give up and stop ordering online.

Melinda, with its smart technology and insulation, makes stolen packages and spoiled groceries a thing of the past. Each Melinda includes a camera and a speaker. When a delivery courier arrives at your home, Melinda tells the courier to scan the package barcode by holding it up to the camera. If the code is valid, the front door opens and Melinda instructs the courier to place the package inside and close the door securely. The built-in scale in the base of each Melinda verifies that the weight of the delivery matches the weight of the item(s) you ordered. The courier receives a voice confirmation, and your purchase is safe and secure. Melinda sends you a text letting you know that your item arrived along with a video of the courier making the delivery.

When you return home and are ready to retrieve your delivery, just use the built-in fingerprint reader to unlock the door. Melinda can store and recognize up to ten saved fingerprints so that all members of your family can access Melinda.

Do you use Instacart, Amazon, or Walmart for online grocery delivery? If so, are you tired of spoiled groceries in the hot sun? Melinda keeps your chilled and frozen food cold. The walls of Melinda are two inches thick and made with the same pressure-injected foam used in the best coolers, keeping your groceries cool for up to twelve hours.

Melinda fits easily on your porch or stoop, taking up just a few feet of space, and you can choose from a variety of colors and finishes to make Melinda an attractive addition to the appearance of your home.

“Melinda is a breakthrough in safety and convenience for online shoppers,” says Lisa Morris, CEO of Blue Corp. “In creating Melinda we combined a number of the latest technologies at the low price of just $299.”

“Melinda is a lifesaver,” said Janet Thomas, a frequent online shopper and customer of Instacart. “It is so frustrating when one of my packages is stolen from my front porch, and it can be time-consuming to work with customer support to get a refund. I use Instacart every week for grocery delivery, and many times I am not home when my groceries arrive. I love knowing that they are kept cool and secure in my Melinda. I selected the natural teak finish for my Melinda—it looks great on my front porch.”

To order your Melinda, simply visit keepitcoolmelinda.com, or visit amazon.com, walmart.com, Walmart stores, and other leading retailers.

Internal FAQs

Q: How large is the estimated consumer demand for Melinda? A: Based on our research, we estimate that ten million households in the United States, Europe, and Asia would want to buy Melinda at a $299 price point.

Q: Why is $299 the right price point? A: There are no directly comparable products in the marketplace today. One similar product is Amazon Key, which allows couriers access to your home, garage, or car using smart lock technology. Another similar product is Ring Doorbell, which ranges in price from $99 to $499. We based our price on customer surveys and focus groups combined with the price needed to ensure profitability.

Q: How does Melinda recognize barcodes on packages? A: We will license barcode-scanning technology from Green Corp. at a cost of $100K per year. In addition, we need to develop an API that will allow us to link a Melinda customer account with any e-commerce provider (Amazon, Walmart, eBay, OfferUp, etc.), which provides us with the item tracking number from the e-commerce or delivery merchant. This way we can recognize the barcode with the package tracking number and know either the exact or an estimated weight for each item.

Q: What if a customer receives an order from an e-commerce provider and they haven’t linked their account yet? A: We make it easy for customers to link their orders because we will offer a browser plug-in for Melinda customers that detects when they place an order with an e-commerce provider, which then links their account and the order details to their Melinda.

Q: Why will e-commerce providers like Amazon and Walmart be willing to share these package delivery details with us? What is in it for them? A: We believe we can convince them that the customer experience benefits will enable them to increase their sales. In addition, we will work closely with their business and legal groups to ensure that we handle their customer data in ways that meet their stringent requirements. Alternatively, we will offer a simple UI for customers to copy and paste each tracking number from their e-commerce provider to the Melinda app.

Q: What happens if a customer gets more than one delivery in a day? A: Melinda can accept multiple deliveries each day until the unit is full.

Q: What if the package is too big for Melinda? A: Packages exceeding 2′x2′x4′ won’t fit in Melinda. Melinda can still record the delivery person and scan the barcode, but the item is stored outside Melinda.

Q: How does Melinda prevent a courier from stealing items that are already in Melinda from a prior order? A: There are several ways. The first is that the forward-facing camera records any activity or access to Melinda. The second is that there is a scale at the base of the unit that detects the weight of the shipment and verifies that this matches the item(s) ordered. If a second delivery is made in one day, Melinda knows the weight of the first delivery and the estimated weight of the second delivery, so if the net weight is lower, Melinda knows that the courier has removed something and will sound an alarm.
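
As a rough illustration of that weight check, here is a hypothetical sketch; the function name, fields, and tolerance are invented for illustration and are not part of the actual proposal:

```python
# Hypothetical sketch of Melinda's weight check for a second same-day delivery.
# Names, tolerance, and data shapes are illustrative only.

TOLERANCE_KG = 0.2  # allowance for packaging variance

def verify_second_delivery(weight_before_kg: float,
                           measured_after_kg: float,
                           expected_new_item_kg: float) -> str:
    """Compare the scale reading after a delivery with what it should be:
    the weight already inside Melinda plus the estimated weight of the new item."""
    expected_after = weight_before_kg + expected_new_item_kg
    if measured_after_kg < weight_before_kg - TOLERANCE_KG:
        return "alarm"          # net weight dropped: something was removed
    if abs(measured_after_kg - expected_after) <= TOLERANCE_KG:
        return "confirmed"      # delivery matches the expected item weight
    return "flag_for_review"    # mismatch: notify the customer, keep the video

# Example: 1.5 kg already inside, courier adds an item expected to weigh 2.0 kg.
print(verify_second_delivery(1.5, 3.4, 2.0))   # "confirmed" (within tolerance)
print(verify_second_delivery(1.5, 1.0, 2.0))   # "alarm"
```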

Q: What is the estimated bill of materials (BOM) or cost to manufacture each Melinda, and how much profit will we make per unit? A: The estimated BOM is $250 for each Melinda, meaning that our gross profit per unit is $49. The most expensive parts in Melinda are the shell and insulation ($115), the fingerprint reader ($49), and the scale.

Q: What is the power source for Melinda? A: Melinda requires a standard AC outlet.

Q: What size team is required to build Melinda? A: We estimate that we need a team of 77 at an annualized cost of $15 million. There are several teams required to build Melinda, but these can be broken down into hardware and software teams.

On the hardware side, we need a team for each of the following:

  1. The physical shell, color choices, and finishes (6)
  2. Integration of the various smart and mechanical components, including the fingerprint reader, the camera, the automatic (open/close) door, the speaker, and the scale (12)

On the software side, we will need a team for each of the new services. Below is our current assessment of what teams will be required and how many people should be on each team, including product managers, engineers, designers, and so on:

  1. Voice commands to couriers (10)
  2. Fingerprint capture and storing (8)
  3. Package tracking and item weight details (11)
  4. Barcode reader (7)
  5. API to link e-commerce accounts to Melinda (12)
  6. Browser plug-in/web interface for account linking (5)
  7. Melinda app for iOS and Android (6)


Press Release Components

These are the key elements of the press release:

Heading: Name the product in a way the reader (i.e., your target customers) will understand. One sentence under the title. “Blue Corp. announces the launch of Melinda, the smart mailbox.”

Subheading: Describe the customer for the product and what benefits they will gain from using it. One sentence only underneath the heading. “Melinda is the physical mailbox designed to securely receive and keep safe all your e-commerce and grocery deliveries.”

Summary Paragraph: Begin with the city, media outlet, and your proposed launch date. Give a summary of the product and the benefit. “PR Newswire, Atlanta, GA, November 5, 2019. Today Blue Corp. announced the launch of Melinda, a smart mailbox that ensures secure and properly chilled delivery and storage for your online purchases and groceries.”

Problem Paragraph: This is where you describe the problem that your product is designed to solve. Make sure that you write this paragraph from the customer’s point of view. “Today, 23 percent of online shoppers report having packages stolen from their front porch, and 19 percent complain of grocery deliveries being spoiled.”

Solution Paragraph(s): Describe your product in some detail and how it simply and easily solves the customer’s problem. For more complex products, you may need more than one paragraph. “With Melinda, you no longer need to worry about getting your online purchases and deliveries stolen…”

Quotes and Getting Started: Add one quote from you or your company’s spokesperson and a second quote from a hypothetical customer in which they describe the benefit they are getting from using your new product. Describe how easy it is to get started, and provide a link to your website where customers can get more information and purchase the product. “Melinda is a breakthrough in safety and convenience for online shoppers…”

FAQ Components

Unlike the PR, the FAQ section has a more free-form feel to it—there are no mandatory FAQs. The PR section does not typically include visuals, but it is more than appropriate to include tables, graphs, and charts in the FAQ. You must include things like your pro forma P&L for a new business or product. If you have high-quality mock-ups or wireframes, they can be included as an appendix.

Often FAQs are divided into external (customer focused) and internal (focused on your company). The external FAQs are those that customers and/or the press will ask you about the product. These will include more detailed questions about how the product works, how much it costs, and how/where to buy it. Because these questions are product specific, they are unique to an individual PR/FAQ. For internal FAQs, there is a more standardized list of topics you will need to cover. Here are some of the typical areas to address.

Consumer Needs and Total Addressable Market (TAM)

These consumer questions will enable you to identify the core customers by filtering out those who don’t meet the product constraints. In the case of Melinda, for example, you would eliminate people who:

Only a discrete number of people will pass through all these filters and be identified as belonging to the total addressable market.

Research into these questions (e.g., how many detached homes are there in a given area?) can help you estimate the total addressable market (TAM), but like any research, there will be a wide error bar. The author and readers of the PR/FAQ will ultimately have to decide on the size of the TAM based on the data gathered and their judgment about its relevance. With Melinda, this process would likely lead to the conclusion that the TAM is in fact pretty small.

Economics and P&L

For this section of the PR/FAQ, ideally one or more members of your finance team will work with you to understand and capture these costs so you can include a simplified table of the per-unit economics and a mini P&L in the document. A resourceful entrepreneur or product manager can do this work themselves if they do not have a finance manager or team.

For new products, the up-front investment is a major consideration. In the case of Melinda, there is a requirement for 77 people to work on the hardware and software, for an annualized cost of roughly $15 million. This means that the product idea needs to have the potential to earn well in excess of $15 million per year in gross profit to be worth building.
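
Here is a minimal sketch of that threshold using only the figures in this example ($299 price, $250 BOM, $15 million annualized team cost); the break-even unit count is simple derived arithmetic, not a number from the book:

```python
# Rough break-even sketch for Melinda, using only figures quoted above.
price = 299.0                       # retail price per unit
bom = 250.0                         # estimated bill of materials per unit
annual_team_cost = 15_000_000.0     # 77 people, annualized

gross_profit_per_unit = price - bom                      # $49
breakeven_units = annual_team_cost / gross_profit_per_unit
print(round(gross_profit_per_unit))                      # 49
print(round(breakeven_units))                            # ~306,000 units/year just to cover the team
```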

The consumer questions and economic analysis both have an effect on the product price point, and that price point, in turn, has an effect on the size of the total addressable market.

Price is a key variable in the authoring of your PR/FAQ. There may be special assumptions or considerations that have informed your calculation of the price point—perhaps making it relatively low or unexpectedly high—that need to be called out and explained. Some of the best new product proposals set a not-to-exceed price point because it forces the team to innovate within that constraint and face the tough trade-offs early on. The problem(s) associated with achieving that price point should be fully explained and explored in the FAQ.

Suppose your research into Melinda leads you to conclude that to realize the largest possible TAM, you need to offer the product at no more than $99. The bill of materials (BOM), however, comes to $250. Now you have two choices to suggest. First, alter the specs, strip out features, or take other actions that will reduce the BOM to below $99. Second, construct a financial plan that shows heavy losses in the early days of release, but also shows that the losses can eventually be mitigated with BOM reductions as the product achieves scale or can be enhanced with some additional source of revenue (e.g., an associated service or subscription).

Dependencies

A common mistake among less-seasoned product managers is to not fully consider how third parties who have their own agendas and incentives will interact with their product idea, or what potential regulatory or legal issues might arise.

The role of third parties is a major issue with Melinda, whose success largely depends on their involvement and proper execution. Without the correct package tracking data or the cooperation of the companies that own that data and the couriers who deliver the packages, Melinda (as described) would be useless. The only alternative would be for customers to manually enter their tracking information for every single delivery into the Melinda app, which they are unlikely to do—and even if they did, it would still require couriers to be willing and able to use it. A good PR/FAQ honestly and accurately assesses these dependencies and describes the specific concepts or plans for the product to solve them.

Feasibility

These questions are intended to help the author clarify to the reader what level of invention is required and what kind of challenges are involved in building this new product. These criteria vary from product to product, and there are different types of challenges ranging from technical to legal to financial to third-party partnerships and customer UI or acceptance.

With Melinda, the engineering challenges are probably quite manageable, since no new technologies need to be developed or employed. The user interface is also familiar. The third-party dependencies present the greatest challenge to making Melinda work.

Go Ahead?


share price is what Amazon calls an “output metric.” The CEO, and companies in general, have very little ability to directly control output metrics. What’s really important is to focus on the “controllable input metrics,” the activities you directly control, which ultimately affect output metrics such as share price.


Staying Close to the Business

Some business-critical information, such as number of new customers and sales by category, was simply there for the taking and easy to collect. But there were other kinds of information that we could only produce with a series of bespoke ad hoc reports.

Define

First, you need to select and define the metrics you want to measure. The right choice of metrics will deliver clear, actionable guidance.

Before you can improve any system … you must understand how the inputs affect the outputs of the system. You must be able to change the inputs (and possibly the system) in order to achieve the desired results. This will require a sustained effort, constancy of purpose, and an environment where continual improvement is the operating philosophy.2 Amazon takes this philosophy to heart, focusing most of its effort on leading indicators (we call these “controllable input metrics”) rather than lagging indicators (“output metrics”). Input metrics track things like selection, price, or convenience—factors that Amazon can control through actions such as adding items to the catalog, lowering cost so prices can be lowered, or positioning inventory to facilitate faster delivery to customers. Output metrics—things like orders, revenue, and profit—are important, but they generally can’t be directly manipulated in a sustainable manner over the long term. Input metrics measure things that, done right, bring about the desired results in your output metrics.


The Flywheel: Input Metrics Lead to Output Metrics and Back Again

In 2001 Jeff drew the simple diagram below on a napkin to illustrate Amazon’s virtuous cycle, also called the “Amazon flywheel.” This sketch, inspired by the flywheel concept in Jim Collins’s book Good to Great, is a model of how a set of controllable input metrics drives a single key output metric—in this case, growth. In this closed-loop system, as you inject energy into any one element, or all of them, the flywheel spins faster:

improve customer experience: Better customer experience leads to more traffic. More traffic attracts more sellers seeking those buyers. More sellers lead to wider selection. Wider selection enhances customer experience, completing the circle. The cycle drives growth, which in turn lowers cost structure. Lower costs lead to lower prices, improving customer experience, and the flywheel spins faster.
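
As a purely illustrative toy model (invented growth rates and a simplified subset of the flywheel's elements), the closed-loop idea can be sketched like this: injecting energy into any one element ends up lifting all of them.

```python
# Toy model of the flywheel as a closed loop. The elements, growth rates, and
# update rule are invented for illustration; the point is only that a boost to
# any one input feeds the others on subsequent turns.

state = {"experience": 1.0, "traffic": 1.0, "sellers": 1.0, "selection": 1.0}
feeds = {"traffic": "experience", "sellers": "traffic",
         "selection": "sellers", "experience": "selection"}
RATE = 0.05  # how strongly each element responds to its driver, per turn

def spin(state, turns, boost_experience=0.0):
    for _ in range(turns):
        new = {metric: state[metric] * (1 + RATE * state[driver])
               for metric, driver in feeds.items()}
        new["experience"] += boost_experience   # inject energy into one input
        state = new
    return {k: round(v, 3) for k, v in state.items()}

print(spin(dict(state), 10))                          # baseline after 10 turns
print(spin(dict(state), 10, boost_experience=0.02))   # every element ends higher
```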

Metrics

  1. Identify the Correct, Controllable Input Metrics. This step sounds easy but can be deceptively tricky, and the details matter. One mistake we made at Amazon as we started expanding from books into other categories was choosing input metrics focused around selection, that is, how many items Amazon offered for sale. Each item is described on a “detail page” that includes a description of the item, images, customer reviews, availability (e.g., ships in 24 hours), price, and the “buy” box or button. One of the metrics we initially chose for selection was the number of new detail pages created,

Once we identified this metric, it had an immediate effect on the actions of the retail teams. They became excessively focused on adding new detail pages—

We soon saw that an increase in the number of detail pages, while seeming to improve selection, did not produce a rise in sales, the output metric.

When we realized that the teams had chosen the wrong input metric—which was revealed via the WBR process—we changed the metric to reflect consumer demand instead. Over multiple WBR meetings, we asked ourselves, “If we work to change this selection metric, as currently defined, will it result in the desired output?” As we gathered more data and observed the business, this particular selection metric evolved over time from number of detail pages, which we refined to number of detail page views (you don’t get credit for a new detail page if customers don’t view it), which then became the percentage of detail page views where the products were in stock (you don’t get credit if you add items but can’t keep them in stock), which was ultimately finalized as the percentage of detail page views where the products were in stock and immediately ready for two-day shipping, which ended up being called Fast Track In Stock.
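
Here is a small sketch of how the final form of that metric, Fast Track In Stock, might be computed from detail-page-view records; the field names are hypothetical:

```python
# Sketch of the final selection metric, "Fast Track In Stock": the percentage of
# detail page views where the item was in stock and ready for two-day shipping.
# The event fields below are hypothetical, for illustration only.

def fast_track_in_stock(page_views):
    """page_views: iterable of dicts like
       {"asin": "...", "in_stock": bool, "two_day_eligible": bool}"""
    views = list(page_views)
    if not views:
        return 0.0
    qualifying = sum(1 for v in views if v["in_stock"] and v["two_day_eligible"])
    return 100.0 * qualifying / len(views)

sample = [
    {"asin": "A1", "in_stock": True,  "two_day_eligible": True},
    {"asin": "A2", "in_stock": True,  "two_day_eligible": False},
    {"asin": "A3", "in_stock": False, "two_day_eligible": False},
    {"asin": "A1", "in_stock": True,  "two_day_eligible": True},
]
print(fast_track_in_stock(sample))   # 50.0
```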

You’ll notice a pattern of trial and error with metrics in the points above, and this is an essential part of the process.

big mistake people make is not getting started. Most WBRs have humble beginnings and undergo substantial changes and improvement over time.

Measure

Building tools to collect the metrics data you need may sound rather simple, but—like choosing the metrics themselves—we’ve found that it takes time and concerted effort to get the collection tools right.

The first metric is inward-facing and operations-centric, while the second metric is outward-facing and customer-centric. Start with the customer and work backwards by aligning your metrics with the customer experience.

One often-overlooked piece of the puzzle is determining how to audit metrics. Unless you have a regular process to independently validate the metric, assume that over time something will cause it to drift and skew the numbers.
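
One common way to implement such an audit, sketched here with hypothetical names and a placeholder tolerance, is to periodically recompute the metric from raw source data through an independent path and flag any drift:

```python
# Sketch of a metric audit: recompute the reported number from raw source data
# through an independent code path and flag drift beyond a tolerance.
# Function names and the 1% threshold are illustrative only.

def audit_metric(reported_value: float,
                 raw_records,
                 recompute,               # independent function: raw_records -> value
                 tolerance_pct: float = 1.0) -> bool:
    independent_value = recompute(raw_records)
    if independent_value == 0:
        return reported_value == 0
    drift_pct = abs(reported_value - independent_value) / abs(independent_value) * 100
    if drift_pct > tolerance_pct:
        print(f"AUDIT FAIL: reported {reported_value}, recomputed {independent_value} "
              f"({drift_pct:.1f}% drift)")
        return False
    return True

# Example: weekly revenue reported by the pipeline vs. a direct sum over raw orders.
orders = [{"amount": 120.0}, {"amount": 75.5}, {"amount": 310.0}]
audit_metric(505.5, orders, lambda rs: sum(r["amount"] for r in rs))   # passes
```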

Until you know all the external factors that impact the process, it will be difficult to implement positive changes. The objective in this stage is separating signals from noise in data and then identifying and addressing root causes.

Improve

Once you have developed a solid understanding of how your process works along with a robust set of metrics, you can devote energy to improving the process.

Control

This final stage is all about ensuring that your processes are operating normally and performance is not degrading over time.

The WBR: Metrics at Work at Amazon

The Weekly Business Review (WBR) is the place where metrics are put into action.

The Deck

Each meeting begins with the virtual or printed distribution of the data package, which contains the weekly snapshot of graphs, tables, and occasional explanatory notes for all your metrics.

The deck represents a data-driven, end-to-end view of the business.

It’s mostly charts, graphs, and data tables.

Emerging patterns are a key point of focus. Individual data points can tell useful stories, especially when compared to other time periods. In the WBR, Amazon analyzes trend lines to highlight challenges as they emerge

Graphs plot results against comparable prior periods. Metrics are intended to trend better over time.

Graphs show two or more timelines, for example, trailing 6-week and trailing 12-month.

Anecdotes and exception reporting are woven into the deck.

The Meeting

What happens inside the WBR is critical execution work that is not normally visible outside the company. A well-run WBR meeting is defined by intense customer focus, deep dives into complex challenges, and insistence on high standards and operational excellence.

at what level is it appropriate for executives to shift focus to output metrics?

the focus does not shift at any level of management.

everyone from the individual contributor to the CEO must have detailed knowledge of input metrics to know whether the organization is maximizing outputs.

We use consistent and familiar formatting to speed interpretation

A good deck uses a consistent format throughout—the graph design, time periods covered, color palette, symbol set (for current year/prior year/goal), and the same number of charts on every page wherever possible. Some data naturally lend themselves to different presentations, but the default is to display in the standard format.

We focus on variances and don’t waste time on the expected

If things are operating normally, say “Nothing to see here” and move along. The goal of the meeting is to discuss exceptions and what is being done about them. The status quo needs no elaboration.

Our business owners own metrics and are prepared to explain variances

the owners, not the finance team, are expected to provide a crisp explanation for variances against expectations.

We keep operational and strategic discussions separate

The WBR is a tactical operational meeting to analyze performance trends of the prior week. At Amazon, it was not the time to discuss new strategies, project updates, or upcoming product releases.

We try not to browbeat (it’s not the Inquisition)

It’s okay to dig into a meaningful variation that needs more attention,

Zooming In: Weekly and Monthly Metrics on a Single Graph

As we noted above, at Amazon we routinely place our trailing 6 weeks and trailing 12 months side by side on the same x-axis. The effect is like adding a “zoom” function to a static graph that gives you a snapshot of a shorter time period, with the added bonus that you’re seeing both the monthly graph and the “zoomed-in” version of it simultaneously.
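
Here is a minimal matplotlib sketch that approximates the layout with invented sample data: two adjacent panels sharing the same metric, with the weekly panel acting as the zoom.

```python
# Sketch of the side-by-side "zoom" layout: trailing 6 weeks next to the
# trailing 12 months for the same metric. Data are made up for illustration.
import matplotlib.pyplot as plt

weekly = [124, 126, 125, 127, 126, 128]                                   # last 6 weeks
monthly = [100, 104, 103, 108, 112, 111, 115, 119, 118, 123, 127, 126]   # last 12 months

fig, (ax_zoom, ax_year) = plt.subplots(1, 2, figsize=(10, 3))
ax_zoom.plot(range(1, 7), weekly, marker="o", color="tab:orange")
ax_zoom.set_title("Trailing 6 weeks (zoom)")
ax_zoom.set_xlabel("Week")
ax_year.plot(range(1, 13), monthly, marker="o")
ax_year.set_title("Trailing 12 months")
ax_year.set_xlabel("Month")
fig.suptitle("Metric X: weekly zoom alongside the year view")
plt.tight_layout()
plt.show()
```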

Why We Watch Year-over-Year (YOY) Trends

Output Metrics Show Results. Input Metrics Provide Guidance.

There’s another familiar lesson in this graph: output metrics—the data we graphed above—are far poorer indicators of trend causes than input metrics. It turned out in this case that the cause of our decelerating growth was a reduction in the rate of acquiring new customers—but nothing in these graphs gives any clue to that cause.

Data Combined with Anecdote to Tell the Whole Story

Numerical data become more powerful when combined with real-life customer stories.

anecdotes reach the teams that own and operate a service. One example is a program called the Voice of the Customer. The customer service department routinely collects and summarizes customer feedback and presents it during the WBR, though not necessarily every week. The chosen feedback does not always reflect the most commonly received complaint, and the CS department has wide latitude on what to present.


Amazon has a program called Customer Connection,

Every two years the corporate employee is required to become a customer service agent for a few days. The employee gets some basic refresher training from a CS agent, listens in on calls, watches email/chat interactions, and then handles some customer contacts directly. Once they learn the tools and policies, they perform some or all of those tasks under the supervision of a CS agent.

Jeff had recently been learning about how Toyota approached quality control and continuous improvement. One technique they used in their automobile assembly line was the Andon Cord. The car-in-progress moves along the line, and each employee adds a part or performs a task. When any worker notices a quality problem, they are authorized to pull a cord that stops the entire assembly line. A team of specialists swarms to the cord-puller’s station, troubleshoots the issue, and develops a fix so the error never happens again.


Frugality

with a limited budget, you can be successful over time if your approach is patient and frugal. Being Amazonian means approaching invention with long-term thinking and customer obsession, ensuring that the Leadership Principles guide the way, and deploying the practices to drive execution. “Long-term thinking levers our existing abilities and lets us do new things we couldn’t otherwise contemplate,” Jeff wrote. “Long-term orientation interacts well with customer obsession. If we can identify a customer need and if we can further develop conviction that that need is meaningful and durable, our approach permits us to work patiently for multiple years to deliver a solution.”2 Key word: patiently.

The other key is frugality. You can’t afford to pursue inventions for very long if you spend your money on things that don’t lead to a better customer experience, like trade show booths, big teams, and splashy marketing campaigns. Amazon Music and Prime Video are examples of how we kept our investment manageable for many years by being frugal: keeping the team small, staying focused on improving the customer experience, limiting our marketing spend, and managing the P&L carefully.


The magnitude of your inventions, and therefore your mistakes, needs to grow in lockstep with the growth of your organization. If it doesn’t, your inventions will likely not be big enough to move the needle.

As a company grows larger, it can become more difficult to keep the invention machine humming, and one impediment is “one-size-fits-all” decision-making.

Two-Way Door

“Some decisions are consequential and irreversible or nearly irreversible—one-way doors—and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that—they are changeable, reversible—they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.”

Prime was a two-way door decision. If Prime’s particular combination of subscription, free shipping, and quick delivery had not worked, we’d have kept tinkering with the formula until we got it right.

The Fire Phone, on the other hand, was more of a one-way door decision: upon withdrawing it from the market, Amazon did not turn around and say, “Okay, that happened, now let’s try another phone.”


Kindle

fiscal 2004, Apple sold 4.4 million iPods—about four times more than the prior year—and the proliferation of shared digital music files online had already prompted a decline in sales of music CDs. It seemed only a matter of time before sales of physical books and DVDs would decline as well, replaced by digital downloads.

first action was not a “what” decision, it was a “who” and “how” decision. This is an incredibly important difference. Jeff did not jump straight to focusing on what product to build, which seems like the straightest line from A to B. Instead, the choices he made suggest he believed that the scale of the opportunity was large and that the scope of the work required to achieve success was equally large and complex. He focused first on how to organize the team and who was the right leader to achieve the right result. Though the shift to digital was already beginning to happen, no one could predict when the tide would really turn. No one wanted to get in too early, with a product that did not yet have a market. But no one wanted to miss the moment either and be unable to catch up.

What I didn’t get was why Steve and I had to change jobs and build up a whole new organization. Why couldn’t we manage digital media as part of what we were already doing? After all, we would be working with the same partners and suppliers. The media had to come from somewhere and that somewhere was media companies: book publishers, record companies, and motion picture studios. I already managed the co-op marketing relationships with those companies, so it made sense that we should do this within the same organization and build off the knowledge and success of our strong team. Otherwise, Amazon would have two different groups responsible for business relationships with partners and suppliers. But Jeff felt that if we tried to manage digital media as a part of the physical media business, it would never be a priority. The bigger business carried the company after all, and it would always get the most attention.

For this to become one of Amazon’s biggest and most important businesses, Jeff needed Steve, an experienced and proven vice president (now promoted to senior vice president), reporting to Jeff, single-threaded on digital. Steve would in turn need to build a team of senior leaders under him, each of whom would be single-threaded on one aspect of the business, such as device hardware, e-books, music, or video.

The Startup Phase for Amazon Digital Media and Devices To work through the details of our approach to digital books, music, and video, we spent roughly six months researching the digital media landscape and meeting as a leadership team with Jeff on a weekly basis to review and brainstorm countless ideas and concepts.

current state of the digital music business. At the time, it was divided into two camps: in one were services like Napster that facilitated free file sharing; in the other, by itself, was Apple, selling songs to load onto the iPod for 99 cents each. Larry was eager for more big tech companies to enter the business, as that would mean more revenue for Universal Music.

One of the decisions we had to make in that first year was whether to build a business or to buy a company already operating in that space. We had many meetings with Jeff where Steve and I would present our ideas for our music product or a company we might acquire. Each time we had these meetings, Jeff would reject what he saw as copycat thinking, emphasizing again and again that whatever music product we built, it had to offer a truly unique value proposition for the customer. He would frequently describe the two fundamental approaches that each company must choose between when developing new products and services. We could be a fast follower—that is, make a close copy of successful products that other companies had built—or we could invent a new product on behalf of our customers. He said that either approach is valid, but he wanted Amazon to be a company that invents.

The invention approach required the endurance to evaluate and discard many options and ideas. So, as we were considering which path to take—build or buy—we took countless meetings with different companies in the digital media business. In addition to enabling us to understand our options for potential acquisition, it was a productive way for us to get up to speed quickly on different aspects of the digital media business, as the founders and leaders of these companies shared their experience and insights from working on a variety of product challenges. In parallel, we were writing some of our first PR/FAQs for digital media products, which we would review and discuss with Jeff.

It was thanks to the combination of Amazon processes, which we discussed in part one of the book, that Jeff was able to make these changes. For example, the six-page document and S-Team goals allowed Jeff to stay aligned on all major retail and marketplace programs and give feedback in an efficient manner, even as he devoted less calendar time to those businesses. And for new initiatives in Digital (as well as AWS), the PR/FAQ process enabled him to spend weeks or months to gain alignment and clarity at a high level of detail on each project. Once he and the team had aligned on each detailed PR/FAQ, Digital and AWS leaders could then run as hard as possible to build their teams and launch new products, with the knowledge that they were in lockstep with the CEO. This enabled Jeff to direct and influence multiple projects simultaneously. This kind of alignment existed not because Jeff was CEO but because we had a process in place that enabled it. The same process could allow teams at any company to work autonomously and yet be in sync with the intentions of their leaders.

physical media was based on having the broadest selection of items available on a single website. But this could not be a competitive advantage in digital media, where the barrier to entry was low.

Another key element of our competitive advantage in the physical retail business was our ability to offer consistently low prices. If you think back to the flywheel of growth, this was associated with our lower cost structure in comparison to other retailers, because we had no stores. But cost structure was not a factor in digital. The process and costs associated with hosting and serving digital files were basically the same whether you were Amazon, Google, Apple, or a startup.

In physical retail, Amazon operated at the middle of the value chain. We added value by sourcing and aggregating a vast selection of goods, tens of millions of them, on a single website and delivering them quickly and cheaply to customers.

this meant moving out of the middle and venturing to either end of the value chain. On one end was content, where the value creators were book authors, filmmakers, TV producers, publishers, musicians, record companies, and movie studios. On the other end was distribution and consumption of content. In digital, that meant focusing on applications and devices consumers used to read, watch, or listen to content, as Apple had already done with iTunes and the iPod. We all took note of what Apple had achieved in digital music in a short period of time and sought to apply those learnings to our long-term product vision.

Although we knew nothing about building hardware, Jeff and Steve decided that the place to start was at the consumption end of the chain: hardware, specifically e-books. There were multiple reasons for this. One was that books were still the single largest category at Amazon and the one most associated with the company. Music was the first category to move to digital in the marketplace, but Apple had a big head start and our sessions did not produce a PR/FAQ for a music device or service idea that was sufficiently compelling. Video had not gone digital yet, which seemed like an opportunity, but it became apparent that there were a number of barriers to creating a great video experience at that time. These included getting the rights from the studios to offer their movies and TV shows digitally, the time it would take to download massive video files over the slow (at the time) internet, and uncertainty about how consumers would play these video files on their TVs. Based on these factors, we decided to make a big investment of people and funds in e-books and a reading device and establish much smaller teams to work on music and video.

e-book business as a whole was tiny; there was no good way to read e-books on a device other than a PC, and reading on a PC was definitely not a good experience.

In those early days of digital media, this was a first. At the time, the only way to load content onto an MP3 player or other portable device was to connect it to your PC with a wire and sync the content between the two machines. This process was known as “sideloading.” While it was convenient to be able to take your music with you on your portable device, the sideloading process was a pain for consumers, and we learned from studies that the average consumer would only bother to connect their iPod to their PC once a year. That meant most people walked around without the latest music on their devices. It was known as the “stale iPod” syndrome. Jeff saw this as an opportunity. He wanted the Kindle to be like the BlackBerry—no wires, never a need to connect to your PC. Not only did he want us to eliminate sideloading altogether,

The other key feature we debated was the use of E Ink, a nascent technology. It had been developed in the MIT Media Lab and spun out as a company in 1997, but there were no major commercial applications in 2005. Although Jeff and the team were unified in their desire to use the new E Ink technology,2 we recognized there would be some trade-offs.

These two features—wireless delivery and the E Ink screen—proved to be two of the keys to making the Kindle great. Wireless delivery meant that customers could search, browse, buy, download, and start reading a new book in under 60 seconds. The E Ink screen’s paper-like display meant that, unlike with an iPad, you could read by the pool, and its low power consumption meant you could read throughout a 12-hour plane flight without worrying about the device dying on you. We take these features for granted today, but in those days they were unheard of.

We had already developed the capability for customers to preview several pages of a book they were interested in—first called Look Inside the Book and then improved to Search Inside the Book. We had worked with publishers to manually digitize their books, so we understood the process. And so we launched Kindle, with its connected e-bookstore, with a selection of 90,000 e-books.

We offered selected bestsellers and new releases at $9.99, which was roughly equal to Amazon’s wholesale cost for those e-books. The price for the Kindle device itself was also very close to our cost. And we were absorbing the cost of Whispernet. While we made money on most of the books we sold and our overall margins on e-book sales were positive (even after the publishers raised their wholesale prices in an unsuccessful attempt to force us to raise our $9.99 price on bestsellers and new releases), the early P&L for this business projected little return in the near term. We were making a big up-front investment in the customer experience, investing some near-term profit in order to get the e-book business and our digital media and devices business off the ground.

And then came Oprah. On October 24, 2008, she devoted an entire episode of her show to Kindle, gushing, “It’s absolutely my new favorite favorite thing in the world.”5 Because millions of viewers looked to Oprah, the “Queen of Reading,” for book recommendations, sales exploded.

Prime

Amazon Prime provided a compelling, game-changing customer experience, and as a result it became the greatest driver of growth for the retail business. But Prime’s journey from idea to launch was an unusual one for Amazon. It did not have a single-threaded leader or team until very late in the process. There was no clear mission statement, and it did not follow the then-nascent Working Backwards process until well after the project was underway. Few Amazonians thought it was a good idea even when it launched.

You’ll see that what really drove the launch of Prime was our realization, after a monthslong deep dive into the data, that our customers’ needs and the capabilities of the fulfillment network we had spent the better part of nine years and $600 million building were not aligned.

There were two options:

  1. Stay the course. The company is still growing. Let’s maximize our return on this multiyear investment we just made to build our fulfillment centers and tweak them to improve along the way. The next batch of quarterly results will reflect that we’re moving in the right direction.
  2. Two-day shipping and eventually one-day and same-day shipping will become the norm. Therefore, while what we’ve built is good, it is not good enough. Buoyed by our “unshakeable conviction that the long-term interests of shareowners are perfectly aligned with the interests of customers,” we should embark on this new journey right now.

need to ruthlessly stick to the simple-to-understand (but sometimes hard-to-follow) principles and process that insist on customer obsession, encourage thinking long term, value innovation, and stay connected to the details.

Amazon customers cared about three main things that we could deliver for them:

  1. Price. Is the price low enough?
  2. Selection. Does Amazon have a wide range of products—ideally everything?
  3. Convenience. Is the product in stock, and can I get it quickly? Can I easily find or discover the product?

Price, selection, and convenience were therefore the inputs for our business. And we could control all three. Every week the senior leaders would review detailed price, selection, and convenience metrics for each product line and challenge the teams if they were falling short along any of those dimensions.

The data we’d collected over the years through many tests just reinforced this. Shipping promotions drove significantly higher growth than any other type of promotion. The perceived value of free shipping was higher than straight discounting of product prices.

Our first attempt was in early 2002, when we launched the Free Super Saver Shipping program for qualifying orders over $99. “Qualifying” meant products that were sold by Amazon rather than by Marketplace sellers, and also products that were not abnormally large or too heavy to ship. Super Saver Shipping was built in much the same way that Amazon Prime was later developed. Both projects started with a decisive move, and an insanely compressed timeline, leading up to a public launch. These timelines were part of the DNA of a company whose very first employee’s job description, as you may recall, made clear that the candidate would have to accomplish large and complex tasks in “one-third the time that most competent people think possible.”

With Super Saver Shipping, the order would leave the fulfillment center within three to five days of being placed, and would be carried to its destination by a ground delivery service. This enabled Amazon to keep its costs low, since no flights were involved. It also made it possible for Amazon to group items together—ones that might have been ordered separately or were not all immediately available from a single fulfillment center—which reduced the total number of packages shipped.

Even though Super Saver Shipping made sense for Amazon’s supply chain and was a popular feature, we realized that it could not be the driver of significant growth for the retail business. First, many of our heaviest buyers needed the fastest possible delivery—they were not willing to wait three to five days for an item to ship. Second, some of our price-sensitive customers were not willing to increase their order to $25 (the qualifying threshold, which started at $99, was later lowered) just to qualify for Super Saver Shipping.

One way in which we tracked our shipping performance was with a metric called “Click to Deliver.” This was the total amount of time from the moment the customer placed an order (click) to the moment the package arrived at its final destination (deliver). We divided the process into two segments. The first—the click-to-ship time—was the amount of time required for Amazon to process the order, package it, and hand it over to a third-party delivery service. The second segment—the ship-to-deliver—was the time between the handover and the customer receiving their package.
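
As a rough illustration (my sketch, not the book's), the metric decomposes into a simple sum of the two segments; the timestamp names below are hypothetical.

```python
from datetime import datetime

def delivery_metrics(ordered_at: datetime, shipped_at: datetime, delivered_at: datetime) -> dict:
    """Split total delivery time into the two segments described above.
    The three timestamps (order placed, package handed to the carrier,
    package delivered) are hypothetical field names."""
    click_to_ship = (shipped_at - ordered_at).total_seconds() / 3600
    ship_to_deliver = (delivered_at - shipped_at).total_seconds() / 3600
    return {
        "click_to_ship_hours": click_to_ship,
        "ship_to_deliver_hours": ship_to_deliver,
        "click_to_deliver_hours": click_to_ship + ship_to_deliver,
    }

# Example: ordered Monday 10:00, shipped Wednesday 18:00, delivered Friday 14:00.
print(delivery_metrics(datetime(2004, 5, 3, 10, 0),
                       datetime(2004, 5, 5, 18, 0),
                       datetime(2004, 5, 7, 14, 0)))
```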

Click-to-ship was the part of the process that we could control,

So the catch was that “fast and free” was where Amazon needed to go next, but our fulfillment capabilities were not up to the task.

Loyalty Programs

So, we brainstormed solutions to the fundamental shipping problem. Our marketing, retail, and finance teams set three criteria that any new marketing initiative would have to meet to go forward:

  1. It had to be affordable (an eye-catching but financially unsustainable approach was out of the question).
  2. It had to drive the right customer behavior (that is, nudge customers to buy more from Amazon).
  3. It had to be a better use of funds than the obvious alternative, which was to invest those same funds into actions that would improve the customer experience, such as lowering prices even further or improving our in-stock rate.

The marketing and retail teams analyzed several variations of loyalty programs, including free standard shipping for orders over $25 (which was essentially Super Saver Shipping but without the three-to-five-day click-to-ship time), free shipping on all preorders (that is, orders placed before the item’s official first ship date), paying an annual fee for free standard shipping, or free two-day shipping. We also considered an alternate form of loyalty program that would include different combinations of purchases of our “owned inventory” (items we stocked in our fulfillment centers) and of third-party items, where we would have to subsidize shipping costs or require third-party sellers to do so. We even evaluated rebates and points-based programs similar to the airlines’, but there’s an important difference between airlines and retailers. Once a plane takes off, its empty seats have no value. Therefore, airlines, in exchange for loyalty, can give away marginal inventory that would otherwise go unsold. In retail, by contrast, giving away product or covering shipping fees always has a cost. None of the ideas made it very far because they could not meet the three essential criteria.

The “institutional no” is a big reason why Amazon could have made an error of omission in this case. Jeff and other Amazon leaders often talk about the “institutional no” and its counterpart, the “institutional yes.” The institutional no refers to the tendency for well-meaning people within large organizations to say no to new ideas. The errors caused by the institutional no are typically errors of omission, that is, something a company doesn’t do versus something it does. Staying the current course offers managers comfort and certainty—even if the price of that short-term certainty is instability and value destruction later on.

Walking the Store

Most retail CEOs walk the store when they have a chance, and Jeff is no exception. The typical CEO will pay a visit to a retail outlet when they’re in the area—often unannounced, or even incognito—


Prime Video

Let me explain how our service worked, or, I should say, sort of worked. First, you would go to the Amazon website to get the Amazon Unbox application, download it, and install it on your PC. I say PC because if you were a Mac user, you were out of luck—Unbox only ran on Windows machines, and only on Windows machines less than three years old. And even if you had a PC, the installation process was frustratingly slow. However, if you got the app installed, you could then go on the Amazon website and select a movie for download. That was where Unbox ran into more trouble. In 2005, because streaming high-quality video was not yet possible, you had to download the movie to your hard drive before you could start watching. How long did that take? Well, a clever customer would take their personal laptop to the office where they would have access to what was then considered a “high-speed” network. Even with that, it would take an hour or two to download a two-hour movie. For those who forgot to do an office download, or didn’t have access to a high-speed network, the process would take considerably longer. With a DSL connection, the standard of the day, it could take as long as four hours to get the job done.
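
A quick back-of-the-envelope calculation (my numbers, not the book's) shows why the waits were so long; it assumes a roughly 2 GB file for a two-hour standard-definition movie and link speeds typical of the era.

```python
def download_hours(file_gb: float, link_mbps: float) -> float:
    """Hours to download file_gb gigabytes over a link_mbps megabit-per-second
    connection, assuming the link is fully utilized (real throughput was lower)."""
    bits = file_gb * 8 * 1_000_000_000        # decimal gigabytes to bits
    return bits / (link_mbps * 1_000_000) / 3600

MOVIE_GB = 2.0  # assumed size of a two-hour standard-definition download
for label, mbps in [("office 'high-speed' link (~5 Mbps)", 5.0),
                    ("home DSL (~1.5 Mbps)", 1.5)]:
    print(f"{label}: about {download_hours(MOVIE_GB, mbps):.1f} hours")
# ~0.9 h and ~3.0 h respectively; real-world overhead pushed these toward the waits described above.
```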

We abandoned the burner approach and developed a feature called RemoteLoad. It enabled you to browse the Amazon site on any computer—it didn’t have to be the one you were going to watch the movie on—purchase a title, and initiate the download so the movie would be available for viewing on your computer of choice whenever you were ready. The shortcoming was that if you wanted to watch the movie on, say, your home PC, that PC had to be powered up, the Unbox app had to be open, and the machine had to be connected to the internet. Very few customers would take the trouble to do all that.

This was directly antithetical to the notion of focusing on the customer, not the competitor. We had conducted an internal employee-only beta test, but we failed to use the results as an opportunity to slow down, carefully review the customer feedback, and take the time needed to make real changes to improve the quality of the customer experience. We were just focused on shipping. We had prioritized speed, press coverage, and competitor obsession over the customer experience. We had been decidedly un-Amazonian.


“Why would I fire you now? I just made a million-dollar investment in you. Now you have an obligation to make that investment pay off.”

The Issue of Rights

Now we had to figure out how to fix what we had so poorly wrought. The fact of the matter was that Unbox was boxed in on all sides: by our competitors, particularly Apple; by our reliance on Microsoft for media playback and PCs running Windows; and by our suppliers, the movie studios. A key issue was the use of digital rights management software, or DRM, to control the download of proprietary content and prevent theft, sharing, and reuse by customers. Apple had developed its proprietary DRM software, called FairPlay, that ensured secure content download, and Apple had deals in place with the major content producers. The only way for us to enable our customers to download and play movies on Macs and iPods was to use FairPlay DRM.

little clause buried deep in decades-old contracts between the motion picture studios and the major pay TV channels such as HBO, Showtime, and Starz—the “blackout window” clause. The clause stated that when a new movie became available on DVD from a studio, we had a clearly defined window—usually 60 to 90 days—during which we could digitally sell or rent the title. After that came the blackout window, a period usually lasting three years, during which the pay TV channels had the exclusive rights to air the movies, and we were not allowed to digitally rent or sell them on our service.

The download still took time, but it had a feature called progressive download. As soon as the runtime of the downloaded content exceeded the time it would take to download the rest of the movie, you could start watching. It wasn’t real streaming, but it did speed things up.
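
The rule described here can be written as a one-line check: playback may begin once the runtime already downloaded exceeds the estimated time to fetch the remainder. A minimal sketch, with invented variable names:

```python
def can_start_watching(downloaded_runtime_s: float,
                       remaining_bytes: float,
                       download_rate_bytes_per_s: float) -> bool:
    """Progressive download: allow playback only when the content already on disk
    will play for longer than the time needed to download the rest, so playback
    never catches up with the download."""
    remaining_download_s = remaining_bytes / download_rate_bytes_per_s
    return downloaded_runtime_s >= remaining_download_s

# Example: 30 minutes of video on disk, 1.2 GB still to fetch at ~1 MB/s (~20 minutes).
print(can_start_watching(30 * 60, 1.2e9, 1e6))  # True: playback can begin
```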

In retrospect, it seems obvious that the Netflix launch was a significant threat, because streaming plus subscription would prove to be the magic combination in the digital video business. And they were smart and savvy about how they launched it, including the streaming titles at no extra charge for DVD subscribers.

there was one important way that the world of digital media was the same as the old, analog media world: there was still a great advantage to be had in control. In the old media world, you could control one of two things: the method of distribution of the content or the content itself (or in some cases both). Broadcast networks like NBC and CBS controlled their networks and also developed exclusive content such as TV shows, sports events, and news broadcasts. Studios like Warner and Disney created movies and shows. In the new digital media world, broadcast networks and studios would lose their control over distribution, to be replaced by applications on internet-connected devices.

As time went on, we realized that Amazon Video On Demand was stuck in the middle of the value chain—the valley, really. We didn’t control the upstream end of content development. We didn’t control the downstream end of playback devices. We were essentially a digital distribution system, with nothing unique or proprietary about it.

So, beginning in 2010, we put our resources into a number of new initiatives designed to get us out of the middle: Prime Instant Video, Amazon Studios, and new Amazon devices—Fire Tablet, Fire Phone, Fire TV, and Echo/Alexa.

There is a difficult chicken-and-egg problem with a subscription service. You need to have a great offering to attract paying subscribers. To be able to afford a great offering, you need a lot of paying subscribers. It’s a challenging cold-start problem that generally requires a large up-front investment, which you can hopefully pay back with subscriber growth in future years. Jeff argued that even if we offered streaming videos to Prime members at no additional cost, the business could still be profitable in the long run.

How? A streaming subscription service is a fixed-cost business. When Netflix licensed a movie or TV series from a studio, they paid a fixed fee. The amount was not based on usage. Netflix customers could watch the video once or ten million times; the cost was the same. Yes, there were some variable costs involved, for bandwidth and servers, but these costs amounted to pennies per view. And, as with most technology, those costs declined over time. The cost structure is very different from the DVD rental-by-mail business, where the costs—warehouses, wages, shipping, replacement discs—are variable.
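
A toy model (all numbers invented) makes the fixed-cost point concrete: once subscriber contribution covers the fixed content spend, each additional subscriber is nearly pure profit.

```python
# Toy model of the fixed-cost argument above; every number is invented for illustration.
content_cost = 100_000_000       # fixed annual licensing spend, $
price_per_sub = 60               # annual subscription revenue per subscriber, $
views_per_sub = 200              # streams per subscriber per year
cost_per_view = 0.02             # bandwidth/server cost per stream, $ ("pennies per view")

margin_per_sub = price_per_sub - views_per_sub * cost_per_view   # $56 contribution each
breakeven_subs = content_cost / margin_per_sub

print(f"Contribution per subscriber: ${margin_per_sub:.2f}")
print(f"Break-even subscribers: {breakeven_subs:,.0f}")   # ~1.8 million
# Beyond that point, nearly every additional subscription dollar drops straight to profit.
```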

The major benefit of establishing a popular subscription service with a fixed-cost base is that once you exceed a certain number of subscribers, every new dollar of subscription revenue is pure profit. The hard parts of pulling off this strategy are (a) acquiring a large number of subscribers, and (b) building a catalog of must-see movies and TV series. By integrating Prime Video into the already large and growing Prime customer base we had a leg up on solving the first problem. We were less concerned about the poor initial selection because our time horizon for success was measured in years. We were confident that, given time, we could make the right investments and assemble a great selection of movies and TV series.

The “oh-by-the-way” addition would become a “gotta-have” benefit.

To establish Amazon as a distinct product in every business category and every market was incredibly difficult to do.

Any competitor might launch a Prime shipping clone, or they could potentially build a new Netflix-type service, but it was unlikely that any one of them would be able to do both.

the acquisition of a European movie and TV subscription service called LOVEFiLM. It was essentially the Netflix of Europe, offering DVD rental by mail as well as streaming films and TV shows. LOVEFiLM would help us get a jump on Netflix, which had not launched in Europe at the time.

Meanwhile, as we were figuring out how to navigate to the upstream end of the value chain and finding our way through the LOVEFiLM acquisition, we were also working to establish a presence at the other end—consumption and playback. For that, we needed to create our own hardware offerings,

The first device out of the gate was the Kindle Fire Tablet. It quickly gained a meaningful share of the market and gave Amazon a secure toehold at the video playback end of the value chain. Just shy of a year after launch, in September 2012, it had sold millions of units and was the second-bestselling tablet after the iPad.

After the success of Fire Tablet, the Amazon Devices organization, now led by Dave Limp, began to develop so many new offerings

Fire TV launched in April 2014 at $99, with a number of features that improved the customer experience.

getting Amazon Studios off the ground was one of the fastest new business creation tasks I had during my time at Amazon. This is largely because of the particular and distinct nature of the entertainment industry. Unlike the software and hardware engineering talent pool, which is limited and in high demand, there is a large talent pool of producers, directors, actors, and craftspeople. A small percentage of them are full-time employees in any kind of organization. Most are independent, freelance contractors. Engagements are relatively short term. Scripts too are in virtually endless supply, although, as we’d learned, the percentage of great ones is small. All it really takes to get a production

finding, selecting, and sometimes competing for, the best scripts to greenlight. To solve that challenge, we opened an office in Santa Monica and hired a team of development executives, each of whom had a focus on a specific content genre: comedy, drama, kids.

Before House of Cards, most A-list Hollywood players wanted nothing to do with online productions. Such things were beneath them, just as appearing in advertising had once been seen as low class. But Spacey was willing to take that risk, and he and Netflix broke through the barrier.

The development team was smart and focused in their pursuit of the best scripts that would appeal to Amazon viewers based on years of viewership data. We greenlighted five comedies and five kids shows for pilots (Jeff was involved in the selection). That meant we would produce ten pilot shows, most of them costing several million dollars to create. We did add one interesting new wrinkle. We made all the pilots available to view for free on Amazon before making a decision as to which to greenlight. Through this process, we were able to gather viewership data and ratings and reviews from real customers in order to make better-informed decisions about which shows would attract the most viewers.


AWS

customers are divinely discontented, and “yesterday’s ‘wow’ quickly becomes today’s ‘ordinary.’”

we took a step back, placed ourselves in our affiliates’ shoes, and looked at the problem from their perspective. We’d been operating on the correct assumption that the big attraction of the program for affiliates was the Amazon products themselves, but in so doing we’d overlooked their desire to have choices around the look and feel of the display—for instance, the font size, color palette, or image size. Turns out they didn’t want to settle for the “best available” Amazon format.

in March 2002, we decided to take a chance and launch an experimental feature that changed the way we shared information with the affiliates. Instead of receiving a fully formed product display, the affiliates could choose to receive the product data in a text format called XML. The affiliates would then take that XML product data and write their own software code to incorporate it into their websites according to their own design standards. The goal was for us to get out of the design business so they could innovate without us holding them back.

This new feature was different. It was aimed at a technical audience, affiliates who had software developers on their teams who knew how to write code that transforms the product data XML into something that looks good on their website. We had to create new elements such as user manuals, technical specifications, and sample code all rolled into a software developer kit (SDK) to show them how the system worked.
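
To make that concrete, here is a minimal sketch of what an affiliate's code might have done with such a feed; the XML schema and element names below are invented, not Amazon's actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical product feed; the real schema and element names were different.
feed = """
<Products>
  <Product>
    <Title>Example Book Title</Title>
    <Author>Jane Author</Author>
    <Price>19.99</Price>
  </Product>
</Products>
"""

def render_products(xml_text: str) -> str:
    """Parse the raw product XML and render it with the affiliate's own styling."""
    root = ET.fromstring(xml_text.strip())
    items = []
    for product in root.findall("Product"):
        title = product.findtext("Title")
        price = product.findtext("Price")
        items.append(f'<li class="my-store-style">{title}: ${price}</li>')
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

print(render_products(feed))
```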


We had painstakingly built our catalog of tens of millions of products, which also contained valuable data about consumer behavior toward those products, and many in the company viewed this catalog as a competitive asset not to be shared. On the Associates team, however, we felt that the benefits of letting hundreds of thousands of developers build commerce solutions on top of this data outweighed the potential risks.

In July 2002, we launched the very first version of Amazon Web Services. If the product data XML we had sent to affiliates a few months earlier was the beta, AWS was the 1.0. It included some search and shopping capabilities and a full software development kit, and it was available to anyone, not just affiliates. Also, it was still free. For this one, we did issue a press release, in which Jeff said:

Up until this point Amazon had two sets of customers—buyers and sellers. Now we had a new customer set—the software developer.

our biggest customers were not affiliates and not outsiders of any kind. They were Amazon software engineers. They found Amazon Web Services easier to use than some of our existing internal software tools they had been working with to build amazon.com.

The Primitives Are Known, They Just Haven’t Been Exposed as Web Services

For several decades, well-established hardware and software companies had built and sold capable solutions for a well-known set of problems inherent in building commercial software—storage (databases used to save and retrieve data), message queueing, and notifications (the latter two are different methods computer processes use to communicate with one another). If a software developer needed to implement one of these building blocks, they would have to buy a software license that would typically incur a nontrivial one-time cost plus yearly maintenance fees for however long the product was in use. Moreover, they would either have to buy hardware and run it in their own data center or pay a partner to do it. We didn’t have to invent these building blocks—or “primitives” as they have been called—we just had to figure out how to offer them in the cloud as a web service.

For instance, if you want to use Amazon’s S3 storage service, all you need to do is sign up for a free account and provide a credit card. After a few lines of code to set up your own storage area (called provisioning), you can start storing and retrieving data. You then pay only for what you use, which means there is no time-consuming vendor-selection process and no cost negotiation (the list prices of many corporate software licenses were just the starting point in a negotiation). And you don’t have to secure computers and a data center to run your new database. The cloud provider, in this case Amazon, handles all that.
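
The “few lines of code” claim still holds; a sketch using the modern Python SDK (boto3), which did not exist in this form at the time, looks roughly like this (the bucket name is a placeholder, and credentials are assumed to be configured).

```python
import boto3  # modern AWS SDK for Python

# Assumes credentials are configured in the environment and the default region
# is us-east-1 (other regions also need a CreateBucketConfiguration argument).
s3 = boto3.client("s3")

s3.create_bucket(Bucket="my-example-bucket-name")      # "provisioning" a storage area
s3.put_object(Bucket="my-example-bucket-name",
              Key="hello.txt",
              Body=b"Hello, S3")                        # store an object...
obj = s3.get_object(Bucket="my-example-bucket-name",
                    Key="hello.txt")                    # ...and read it back
print(obj["Body"].read().decode())

# Billing is pay-as-you-go: charges accrue for bytes stored, bytes transferred,
# and requests made, with no up-front license fee.
```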

“undifferentiated heavy lifting,” that is, the tasks that we could do for companies that would enable them to focus on what made them unique.

Server-Side Was Easy for Us and Hard for Most Everyone Else

Another factor that influenced our decision to offer a broader set of services was that, in building and operating one of the world’s largest websites, we had acquired a core competency only a few companies could match. We had the capability to store massive amounts of data, perform computations on that data, and then quickly and reliably deliver the results to end users, be they humans or computers. Suppose, for example, that you want to build a service that stores millions of photos to be searched and queried by millions of customers. In 2002, that would have been a reasonably large but very doable project for Amazon. That pretty much describes, in fact, our Search Inside the Book capability. For most companies, however, such a project would have been cost and time prohibitive. But it was clear that more and more companies would develop or acquire these capabilities, and they eventually would become an undifferentiated commodity.

the PR/FAQ stated that we wanted the student in a dorm room to have access to the same world-class computing infrastructure as any Amazon software engineer.

AWS as It Started

So what happened next? Basically, the first part of the race consisted of many months of iterating on the Working Backwards PR/FAQ process and going through the Bar Raiser process one candidate at a time as fast as we could to start building out the teams.

only two out of the first set of about a half-dozen services were runaway successes—Amazon S3 (Simple Storage Service) and Amazon EC2 (Elastic Compute Cloud). Jeff and I would meet with Andy and the leaders of these teams every two weeks, sometimes more often. There was also a large team that was building out the infrastructure that all these services would use. This infrastructure consisted of components such as metering, billing, reporting, and other shared functions.

Though the initial roadmap of primitives was relatively straightforward, what wasn’t so easy was figuring out how to build them so that they could operate on a scale several orders of magnitude greater than what we were doing for the Amazon retail business.

The Working Backwards process is all about starting from the customer perspective and following a step-by-step process where you question assumptions relentlessly until you have a complete understanding of what you want to build. It’s about seeking truth.

The cost of changing course in the PR/FAQ writing stage is much lower than after you’ve launched and have an operating business to manage. The Working Backwards process tends to save you from the expensive proposition of making a significant course change after you’ve launched your product.

In the FAQ there was a simple question that read something like, “How much does S3 cost?”

One of the first versions of the answer was that S3 would be a tiered monthly subscription service based on average storage use, with a possible free tier for small amounts of data.

We kept discussing this question. We really did not know how developers would use S3 when it launched. Would they store mostly large objects with low retrieval rates? Small objects with high retrieval rates? How often would updates happen versus reads? How many customers would need simple storage (can easily be re-created, stored in only one location, not a big deal if you lose it) and how many would need complex storage (bank records, stored in multiple locations, a very big deal if you lose it)?

Thus, the discussion moved away from a tiered subscription pricing strategy and toward a cost-following strategy. “Cost following” means that your pricing model is driven primarily by your costs, which are then passed on to your customer. This is what construction companies use, because building your customer’s gazebo out of redwood will cost you a lot more than building it out of pine. If we were to use a cost-following strategy, we’d be sacrificing the simplicity of subscription pricing, but both our customers and Amazon would benefit. With cost following, whatever the developer did with S3, they would use it in a way that would meet their requirements, and they would strive to minimize their cost and, therefore, our cost too. There would be no gaming of the system,

Would the most important cost drivers for S3 be the cost of storing data on the disk? The bandwidth costs of moving the data? The number of transactions? Electrical power? We finally settled on storage and bandwidth.

An example in the early days where we did not know the resources required to serve certain usage patterns was with S3: We had assumed that the storage and bandwidth were the resources we should charge for; after running for a while, we realized that the number of requests was an equally important resource. If customers have many tiny files, then storage and bandwidth don’t amount to much even if they are making millions of requests.
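
A toy cost-following calculation (unit prices invented, not actual AWS rates) shows why: for a workload of many tiny objects, the request charge dominates storage and bandwidth.

```python
# Toy cost-following model; unit prices are invented, not actual AWS rates.
PRICE_PER_GB_MONTH = 0.15       # $ per GB stored per month
PRICE_PER_GB_TRANSFER = 0.15    # $ per GB transferred out
PRICE_PER_1K_REQUESTS = 0.01    # $ per 1,000 requests

def monthly_bill(stored_gb: float, transferred_gb: float, requests: int) -> float:
    return (stored_gb * PRICE_PER_GB_MONTH
            + transferred_gb * PRICE_PER_GB_TRANSFER
            + requests / 1000 * PRICE_PER_1K_REQUESTS)

# A few large objects, read occasionally: storage and bandwidth dominate.
print(monthly_bill(stored_gb=500, transferred_gb=200, requests=10_000))      # ~$105
# Millions of tiny files: storage and bandwidth "don't amount to much",
# but the request charge becomes the dominant line item.
print(monthly_bill(stored_gb=5, transferred_gb=5, requests=50_000_000))      # ~$501
```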


Conclusion

“How do I start? Where do I start? What do I actually do to bring some of the aspects of being Amazonian into my business?”