# In the Plex

## Metadata

* Author: Steven Levy
* ASIN: B003UYUP6M
* ISBN: 1416596593
* Reference: https://www.amazon.com/dp/B003UYUP6M
* [Kindle link](kindle://book?action=open&asin=B003UYUP6M)

## Highlights

On his desk and permeating his conversations was Apple interface guru Donald Norman’s classic tome The Psychology of Everyday Things, the bible of a religion whose first, and arguably only, commandment is “The user is always right.” (Other — location: [254](kindle://book?action=open&asin=B003UYUP6M&location=254) ^ref-57217

---

biography of Nikola Tesla, the — location: [257](kindle://book?action=open&asin=B003UYUP6M&location=257) ^ref-60713

---

But the information Google began gathering was far more voluminous, and the company received it for free. Google came to see that instant feedback as the basis of an artificial intelligence learning mechanism. “Doug Lenat did his thing by hiring these people and training them to write things down in a certain way,” says Peter Norvig, who joined Google as director of machine learning in 2001. “We did it by saying ‘Let’s take things that people are doing naturally.’” On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “long click”—this occurred when someone went to a search result, ideally the top one, and did not return. That meant Google had successfully fulfilled the query. But unhappy users were unhappy in their own ways. Most telling were the “short clicks” where a user followed a link and immediately returned to try again. “If people type something and then go and change their query, you could tell they aren’t happy,” says Patel. “If they go to the next page of results, it’s a sign they’re not happy. You can use those signs that someone’s not happy with what we gave them to go back and study those cases and find places to improve search.” — location: [973](kindle://book?action=open&asin=B003UYUP6M&location=973) ^ref-6793

---
that Jeff Dean and Sanjay Ghemawat had developed to compress data so that Google could put its index into computer memory instead of on hard disks. That was a case where a technical engineering project meant to speed up search queries enabled a totally different kind of innovation. — location: [985](kindle://book?action=open&asin=B003UYUP6M&location=985) ^ref-56078

---

But there were obstacles. Google’s synonym system came to understand that a dog was similar to a puppy and that boiling water was hot. But its engineers also discovered that the search engine considered that a hot dog was the same as a boiling puppy. The problem was fixed, Singhal says, by a breakthrough late in 2002 that utilized Ludwig Wittgenstein’s theories on how words are defined by context. — location: [996](kindle://book?action=open&asin=B003UYUP6M&location=996) ^ref-33480

---

There were the familiar “ten blue links” of Google search. (The text consisting of the actual links to the pages cited as results was highlighted in blue.) Early — location: [1055](kindle://book?action=open&asin=B003UYUP6M&location=1055) ^ref-4851

---

“International Googlenomics.” — location: [2495](kindle://book?action=open&asin=B003UYUP6M&location=2495) ^ref-18135

---

IBM and the Holocaust, which — location: [5903](kindle://book?action=open&asin=B003UYUP6M&location=5903) ^ref-45639

---

Montessori training.”) — location: [6813](kindle://book?action=open&asin=B003UYUP6M&location=6813) ^ref-62255

---