The Machine, in other words, restricts the machinegoer and does the dreaming on his behalf; to echo Guy Debord, the user’s dreams are no longer his own but rather the dreams of someone else who represents them back to him.
Monday, May 14, 2018
Sunday, May 13, 2018
What was awesome about the trip? Be as detailed as you'd like.
What should we do differently the next time around? Be as detailed as you'd like.
In retrospect, did you like the 3-day hackathon structure? What was good? What could be improved?
We had a handful of roundtables this retreat. What was good about these? What could be improved?
How did your team day go?
Any other comments?
Thursday, May 10, 2018
A new study by internet radio service Pandora shows that too many ads can motivate users to pay for an ad-free version, but push many more to listen less or abandon the service. The study found that the additional subscription revenue does not make up for the lost ad revenue from those who listen less or leave the service.
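The tradeoff the study describes can be illustrated with a toy revenue model. Every number below is hypothetical and chosen only for illustration; the article excerpt does not publish these figures, only the structure of the tradeoff (subscription gains versus ad revenue lost to reduced listening and churn).

```python
# Toy model of the ad-load tradeoff described in the Pandora study.
# All inputs are hypothetical, for illustration only.

def net_revenue_change(listeners, ad_rev_per_listener,
                       sub_rate, sub_price,
                       reduced_or_churn_rate, ad_rev_lost_frac):
    """Net monthly revenue change after raising the ad load.

    sub_rate: fraction of listeners who upgrade to the paid tier
    reduced_or_churn_rate: fraction who listen less or leave
    ad_rev_lost_frac: share of those listeners' ad revenue that disappears
    """
    gained = listeners * sub_rate * sub_price
    lost = (listeners * reduced_or_churn_rate
            * ad_rev_per_listener * ad_rev_lost_frac)
    return gained - lost

# Hypothetical: 1M listeners earning $1/month in ads each; 1% subscribe
# at $5/month, but 10% listen less, losing 80% of their ad revenue.
delta = net_revenue_change(1_000_000, 1.00, 0.01, 5.00, 0.10, 0.80)
print(f"net change: ${delta:,.0f}/month")  # prints "net change: $-30,000/month"
```

With these made-up inputs, $50,000 of new subscription revenue is outweighed by $80,000 of lost ad revenue, which is the shape of the result the study reports.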
"I'm actually a big fan of anecdotes in business," Bezos said at the leadership forum as he explained why he reads customer emails and forwards them to the appropriate executive. Often, he says, the customer anecdotes are more insightful than data.
I wrote an article last year titled "Google's CEO Doesn't Use Bullet Points and Neither Should You." He still doesn't. Neither do Jeff Bezos, Elon Musk, Richard Branson, or most of the world's most inspiring speakers. Bullets don't inspire. Stories do.
Wednesday, May 09, 2018
Making hardware runs counter to Google’s entire corporate culture. The company shuns process and management, two things a hardware maker can’t do without.
For the new hardware team, the task was clear: Find more ways to get Google Assistant in front of people and build a sustainable business around it.
Osterloh centralized all that hardware under his leadership, giving 55 percent of those 1,000 employees a new manager. Rather than having an executive in charge of each product, Osterloh chose to implement a “functional” structure, giving his leaders oversight of a larger segment of the Google hardware organization. Ivy Ross, formerly head of Google Glass, was put in charge of all hardware design. Mario Queiroz ran product management. Ana Corrales, a longtime manufacturing exec and Nest’s CFO and COO, was tapped to oversee all things operations and supply chain. The team began to centralize their planning and forecasting, and to streamline their conversations with suppliers. They made five-year plans, which were anathema to Google.
"Ship and iterate" simply doesn’t work with hardware. A single tweak can cost weeks and millions of dollars. Every small change ripples through the entire supply chain, changing vendor timelines, requiring new tools, and slowing everything down. If one part is late, you’ll miss your ship date, and it’s not like you can move Black Friday. Oh, you want 50 percent more product than you thought? You’ll get it in six months if you’re lucky. There is no bending the hardware world to your whim.
He went through every hardware initiative at Google, choosing which to continue and which to wind down. None of the decisions was easy, Osterloh says, but two were particularly hard. He’d been around the Ara modular phone project since its beginnings at Motorola and believed fully in its mission: to build a $50 phone with upgradeable parts, which could last longer and be greener than any other device. Yet the device ended up being less modular and more expensive than anyone wanted. “So it was rather like every other phone, except there was an ability to add up to six or so modules to the back,” Osterloh says. He wanted to build one phone, not many, so he shut Ara down.

With Google Glass, too, Osterloh understood the vision but couldn’t figure out how to achieve it quickly. He ticks off the things you’d need to make a great face-worn augmented-reality device that aren’t yet possible: longer-lasting batteries in smaller packages, faster processors that generate less heat, and a populace ready to use such devices. “In the long run, this is going to be a key part of what we do,” he says. “But the timing is a key uncertainty.” In the meantime, Osterloh re-released Glass as an enterprise tool, where it found a surprising niche with factory workers and warehouse employees.

While he was re-writing org charts and culling product lines, Osterloh had also been working with the higher-ups at Google to figure out what, exactly, Google’s hardware strategy should entail. They coined platitudes like “radical helpfulness” and sought ways to communicate humanity and approachability, but mostly they focused on three words, in a very specific order: AI, software, hardware.
Osterloh decided to flank the Pixel effort with other devices that were good matches for Assistant. Another team within Google had in the past released two terrific laptops, called Chromebook Pixel, that only saw limited commercial success. Osterloh told the team to go build something even lighter, thinner, and better—and to integrate Assistant. They decided to call it Pixelbook and set off on their way. A different group started working on headphones they called Pixel Buds that would provide access to Assistant without the need for a phone. The Google Home team and the Chromecast crew were also part of the push. “Eventually, it will be the case that users probably have a constellation of devices to get things done,” Osterloh says.
Tuesday, May 08, 2018
Microsoft Charts Its Own Path on Artificial Intelligence https://www.wired.com/story/microsoft-charts-its-own-path-on-artificial-intelligence/ "Instead, Microsoft is pitching the idea of running AI projects atop chips called FPGAs, whose designs can be reprogrammed to support new forms of software on the fly. That allows Microsoft to avoid having to dabble in designing silicon for its servers—it buys its FPGAs from chip giant Intel. Some companies are already buying into Microsoft’s vision." "Microsoft says customers taking advantage of Project Brainwave can process a million images for just 21 cents using a standard image recognition model, and that a single image can be processed in just 2.8 milliseconds. The company claims that's better than any rival cloud service today, but the true competitiveness of Microsoft's technology won't be apparent until outsiders have had a chance to test it head-to-head against other options. Those include Google's TPUs, and graphics chips from Nvidia, the leading supplier to machine-learning projects. It’s also unclear how widely applicable Microsoft's Brainwave will prove to be. FPGAs aren’t widely used in cloud computing, and so most companies don’t have the expertise needed to program them. " "Some Google servers now include chips customized for machine learning called TPUs that the company designed in-house to deliver better power and efficiency. Google has also started renting them out to its cloud-computing customers. Facebook recently said it is interested in designing similar chips for its own data centers. Both are competing with a pack of startups crafting their own AI server chips, which collectively received hundreds of millions of dollars in funding last year."
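The two Brainwave figures quoted above reduce to per-image numbers with simple arithmetic. A quick sketch, assuming the 2.8 ms figure describes the serial latency of a single request stream; nothing here goes beyond the article's two numbers:

```python
# Back-of-envelope arithmetic on the quoted Project Brainwave figures:
# $0.21 per million images, 2.8 ms per image.

cost_per_million = 0.21   # USD, as quoted
latency_s = 0.0028        # 2.8 ms per image, as quoted

cost_per_image = cost_per_million / 1_000_000
images_per_second = 1 / latency_s  # throughput of one serial stream

print(f"cost per image: ${cost_per_image:.9f}")            # $0.000000210
print(f"single-stream rate: {images_per_second:.0f}/sec")  # ~357/sec
```

Actual throughput would be higher with batching or parallel streams, which is why head-to-head tests against TPUs and GPUs matter before drawing conclusions.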
The Great A.I. Awakening https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html " It would capture context — and something akin to meaning." " three overlapping stories that converge in Google Translate’s successful metamorphosis to A.I. — a technical story, an institutional story and a story about the evolution of ideas. The technical story is about one team on one product at one company, and the process by which they refined, tested and introduced a brand-new version of an old product in only about a quarter of the time anyone, themselves included, might reasonably have expected. The institutional story is about the employees of a small but influential artificial-intelligence group within that company, and the process by which their intuitive faith in some old, unproven and broadly unpalatable notions about computing upended every other company within a large radius. The story of ideas is about the cognitive scientists, psychologists and wayward engineers who long toiled in obscurity, and the process by which their ostensibly irrational convictions ultimately inspired a paradigm shift in our understanding not only of technology but also, in theory, of consciousness itself." "You would give the machine a language map that was, as Borges would have had it, the size of the territory. This perspective is usually called “symbolic A.I.” — because its definition of cognition is based on symbolic logic — or, disparagingly, “good old-fashioned A.I.”" "The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and had welcomed much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”" "two main problems with the old-fashioned approach. The first is that it’s awfully time-consuming on the human end. 
The second is that it only really works in domains where rules and definitions are very clear: in mathematics, for example, or chess." "attitude toward artificial intelligence was evolutionary rather than creationist" "Unlike Searle, they don’t assume that “consciousness” is some special, numinously glowing mental attribute — what the philosopher Gilbert Ryle called the “ghost in the machine.” They just believe instead that the complex assortment of skills we call “consciousness” has randomly emerged from the coordinated activity of many different simple mechanisms" "He dashed off his own Japanese interpretation of the opening to Hemingway’s “The Snows of Kilimanjaro,” then ran that passage back through Google into English. He published this version alongside Hemingway’s original, and proceeded to invite his readers to guess which was the work of a machine.

NO. 1: Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.

NO. 2: Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

Even to a native English speaker, the missing article on the leopard is the only real giveaway that No. 2 was the output of an automaton. Their closeness was a source of wonder to Rekimoto, who was well acquainted with the capabilities of the previous service. Only 24 hours earlier, Google would have translated the same Japanese passage as follows:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained."

"the output is just a prediction based on patterns of patterns, it’s not going to be perfect, and the machine will never be able to define for you what, exactly, a cat is. It just knows them when it sees them. This wooliness, however, is the point." "As of the previous weekend, Translate had been converted to an A.I.-based system for much of its traffic, not just in the United States but in Europe and Asia as well: The rollout included translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages were to come, with the aim of eight per month, by the end of next year. The new incarnation, to the pleasant surprise of Google’s own engineers, had been completed in only nine months. The A.I. system had demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime." "Logical reasoning, on this account, is seen as a lucky adaptation; so is the ability to throw and catch a ball. Artificial intelligence is not about building a mind; it’s about the improvement of tools to solve problems. As Corrado said to me on my very first day at Google, “It’s not about what a machine ‘knows’ or ‘understands’ but what it ‘does,’ and — more importantly — what it doesn’t do yet.”" "There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules)." "He fed the neural network a still he had taken from YouTube. He then told the neural network to throw away some of the information contained in the image, though he didn’t specify what it should or shouldn’t throw away.
The machine threw away some of the information, initially at random. Then he said: “Just kidding! Now recreate the initial image you were shown based only on the information you retained.” It was as if he were asking the machine to find a way to “summarize” the image, and then expand back to the original from the summary. If the summary was based on irrelevant data — like the color of the sky rather than the presence of whiskers — the machine couldn’t perform a competent reconstruction. Its reaction would be akin to that of a distant ancestor whose takeaway from his brief exposure to saber-tooth tigers was that they made a restful swooshing sound when they moved. Le’s neural network, unlike that ancestor, got to try again, and again and again and again. Each time it mathematically “chose” to prioritize different pieces of information and performed incrementally better. A neural network, however, was a black box. " "The cat paper showed that machines could also deal with raw unlabeled data, perhaps even data of which humans had no established foreknowledge." "It is important to note, however, that the fact that neural networks are probabilistic in nature means that they’re not suitable for all tasks. It’s no great tragedy if they mislabel 1 percent of cats as dogs, or send you to the wrong movie on occasion, but in something like a self-driving car we all want greater assurances. This isn’t the only caveat. Supervised learning is a trial-and-error process based on labeled data. The machines might be doing the learning, but there remains a strong human element in the initial categorization of the inputs. If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss,” that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. 
If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible." "The simplest description of a neural network is that it’s a machine that makes classifications or predictions based on its ability to discover patterns in data. With one layer, you could find only simple patterns; with more than one, you could look for patterns of patterns. Take the case of image recognition, which tends to rely on a contraption called a “convolutional neural net.” (These were elaborated in a seminal 1998 paper whose lead author, a Frenchman named Yann LeCun, did his postdoctoral research in Toronto under Hinton and now directs a huge A.I. endeavor at Facebook.) The first layer of the network learns to identify the very basic visual trope of an “edge,” meaning a nothing (an off-pixel) followed by a something (an on-pixel) or vice versa. Each successive layer of the network looks for a pattern in the previous layer. A pattern of edges might be a circle or a rectangle. A pattern of circles or rectangles might be a face. And so on. This more or less parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face." "You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena."
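The first-layer "edge" idea in the passage above (a nothing followed by a something) can be sketched as a one-dimensional filter sliding over pixels. This is an illustrative toy in NumPy, not the architecture from LeCun's 1998 paper:

```python
import numpy as np

# Toy version of a convolutional net's first layer: slide a tiny filter
# over a row of pixels and respond wherever an off-pixel (a "nothing")
# meets an on-pixel (a "something"), or vice versa.
pixels = np.array([0, 0, 0, 1, 1, 1, 0, 0])  # a bright bar in the middle

edge_filter = np.array([-1, 1])  # response[i] = pixels[i+1] - pixels[i]

# np.correlate slides the filter without flipping it (unlike np.convolve).
response = np.correlate(pixels, edge_filter, mode="valid")
print(response.tolist())  # [0, 0, 1, 0, 0, -1, 0]: one rising, one falling edge
```

A second layer would then look for patterns over these responses (for instance, a +1 followed some pixels later by a -1 marks a bar), which is the "patterns of patterns" idea; a real network learns its filters from data rather than having them hand-set.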
Sunday, May 06, 2018
Landon Starr is the head of data science at Clearlink, which uses machine learning to help companies understand consumer behavior: “Although this technology isn’t spot-on quite yet, AI-powered predictions are likely stronger than the human calculations used in the past."
to research on applied machine learning
Even the most data-driven pro sports franchises still need good athletic chemistry to win titles. Moneyball methods might’ve made the Oakland A’s a surprise regular season success, but the team that really showed what advanced stats could do when combined with unpredictable human elements was the 2004 Boston Red Sox, which leveraged players’ scorecards, how they handled pressure, and their team chemistry to win a World Series. If Hollywood wants to do something similar with AI and analytics, it should also understand that it matters who’s wielding that data, and how—and that the best movies are still by people, about people, and for people.
Unlike what is happening at startups outside Hollywood, Legendary’s methods aren’t about predicting the potential success of a movie before it’s made. Instead they’re about optimizing the success of projects already in development, using analytics to decide when and how to release teasers and trailers, and determining how to customize impressions for different potential audiences, even scoring potential moviegoers in terms of their likelihood of attending a specific film
Vault, an Israeli startup founded in 2015, is developing a neural-network algorithm based on 30 years of box office data, nearly 400,000 story features found in scripts, and data like film budgets and audience demographics to estimate a movie’s opening weekend. The company is only a couple years in, but founder David Stiff recently told Fortune that roughly 75 percent of Vault’s predictions “come ‘pretty close’” to films’ actual opening grosses.
While most studios still rely on the traditional methods to put butts in seats—billboards, TV commercials, press junkets for big stars—one production company is also looking to big data to improve how it markets its biggest movies, and who it markets those movies to. Legendary Entertainment, the studio behind movies like Godzilla and Warcraft, brought on Matthew Marolda in 2013 to be its chief analytics officer. Using his background in sports analytics and marketing, Marolda determined Legendary was not gathering the right data about its potential audience and that better information could give Legendary a leg up on studios using traditional methods.
marketing using data
Saturday, May 05, 2018
Here’s how it works: In 30-minute, once-a-week action-packed sessions that employ intensity, repetition and novelty, specially trained coaches engage kids one-on-one or in small groups with amped-up flash cards and a physical regimen akin to gymnastics, including ball skills and fine motor development. Equal time is given to the cards and exercises – a purposeful division, Hannan says, citing research that recommends participating in physical activity before learning or in combination with learning to enhance retention and memory. Parents also participate, helping “coach” their child. All of the card sets captivate the visual, auditory and motor senses simultaneously – a trifecta that has also been shown to aid learning. The cards target different skills such as attentiveness or pre-math ability by presenting the various ways amounts can be seen increasing and decreasing (with familiar images such as dogs and cats), all presented with catchy phrases spoken at varying volumes such as “BIG, little.”
Galanter says PlayWisely is a sensible approach compared to many others he’s encountered for addressing education within the appropriate developmental timelines for ages 0–5. Galanter notes that brain organization, or neural connectivity, starts at less than one week old and launches the cognitive machinery that shapes all future learning. Research has long shown that early experiences and interaction with the environment are central to a child’s brain development and continuously alter the structure of the brain – for better or worse.
Can a combination of flash cards and gravity-based physical activity be key to developing babies’ and young kids’ brains? Former gymnast (and neuroscience enthusiast) Patty Hannan thinks so.
The deceptively simple, once-a-week 30-minute routine, offered in select schools and in private sessions in the Dallas-Fort Worth area, is designed to jump-start a child’s natural learning and movement capability. We’re not talking about traditional ABCs and 123s but a strategic triggering of the neural circuitry and sensory systems that will enable the fundamentals to fall into place efficiently later on.
The PlayWisely concept starts with something simple: flash cards, albeit on steroids. “Without too much heavy science explanation, it’s basically gymnastics on a card,”
Put simply, her method seeks to fire up kids’ brains to give them an edge in tackling milestones faster (such as speaking and potty training), as well as building necessary competencies such as focus at an earlier age.
It’s hard to believe anything can be effective when done only 30 minutes a week, but Motes says the key is the intensity of the program. PlayWisely can accomplish so much in these brief sessions because of the pace and focus of the interactions.
“We are laying the essential foundational wiring, so one day when your child finds her purpose, passion or talent, she can achieve her goals,
“Many of the activities in the PlayWisely sequences are similar to common play activities that actively involved parents engage in with their children, such as encouraging a child to point to a picture of a ‘cat’ in a book that the parent is reading to the child,” Motes says. “However, even involved parents typically engage in such activities unsystematically.”
my advice has always been to look and understand who we are as human beings and how we perceive each other.
friends that I admire are: wolf prix and frank gehry, for all different reasons; steven holl; richard rogers, for all the reasons why he is totally fascinated by urbanism; and eric owen moss. they are characters who are fighting for their culture of architecture. all of them have somewhat different notions of its connection, whether the focus is sustainability or urbanism, etc.
design symbolizes self-expression, and the use of freedom frightens people. right now, there seem to be a lot of people moving with the herd who are comfortable doing that.