
artificial intelligence

Six years ago, we decided to change the world of analytics. We were trying to answer a hard question—maybe an impossible one. Now, at least part of that vision is accomplished. We helped establish the new field of augmented analytics, got acquired, grew to 400 enterprise customers, and now the team behind the product has over 100 people.

10-Year Horizon
What’s Important? The Surprising Benefit of Asking Hard Questions

My favourite way to describe how to think about machine learning is this. Imagine sitting in a plane as it lands. It’s just a typical Boeing descending onto a Newark runway. The pilot hands the landing over to the autopilot. If you could look through the autopilot’s eyes, you would see not one landing but millions, all overlaid on the actual one: past landings at Newark, past landings of Boeings at other airports, at various times and in all kinds of weather. These millions of landings give context to the actual one, informing the autopilot about the best way to approach the runway, what’s normal, what’s an anomaly, what the known problems and their known solutions are, and when to abort. This is what a machine learning algorithm “sees,” and how it learns from each new landing happening around the world. Do you see those millions of landings enveloping you as you sit in that plane?
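The idea of millions of past landings defining “normal” can be sketched in a few lines. This is a toy illustration only, not anything resembling real avionics: the descent rates, their distribution, and the 3-sigma threshold are all invented for the example. It shows how a large history of past examples turns one new observation into something that can be scored as normal or anomalous.

```python
# Toy sketch: score one landing's descent rate against a large
# history of (hypothetical) past landings, the way the overlaid
# millions of landings give context to the actual one.
import random
import statistics

random.seed(42)

# Hypothetical history: descent rates (ft/min) drawn around a
# "normal" value of 700 with some spread.
past_descent_rates = [random.gauss(700, 60) for _ in range(10_000)]

mean = statistics.fmean(past_descent_rates)
stdev = statistics.stdev(past_descent_rates)

def anomaly_score(descent_rate: float) -> float:
    """How many standard deviations this landing sits from 'normal'."""
    return abs(descent_rate - mean) / stdev

def is_anomalous(descent_rate: float, threshold: float = 3.0) -> bool:
    """Flag landings far outside what the history says is normal."""
    return anomaly_score(descent_rate) > threshold

print(is_anomalous(710))   # near the historical norm -> False
print(is_anomalous(1200))  # far outside it -> True, time to reconsider
```

A real system would of course model far more than one number, but the principle is the same: the history defines the envelope, and each new landing is judged inside it.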

Once you start seeing that invisible web, several things happen. You grasp the aviation context of your landing, which transforms the experience. It lets you trust the autopilot and understand how safe you are. Each past landing is, in effect, supporting you!

10-Year Horizon
Seeing the Invisible (And How to Think About Machine Learning)
It’s been two months now since I quit working at the company that bought my startup. Two months into this phase of the void. I was looking forward to it; I also felt the fear. It’s a phase in between projects, in between lives. A transition.

The possibility of sharp jumps in intelligence also implies a higher standard for Friendly AI techniques. The technique cannot assume the programmers’ ability to monitor the AI against its will, rewrite the AI against its will, bring to bear the threat of superior military force; nor may the algorithm assume that the programmers control a “reward button” which a smarter AI could wrest from the programmers; et cetera. Indeed no one should be making these assumptions to begin with. The indispensable protection is an AI that does not want to hurt you. Without the indispensable, no auxiliary defense can be regarded as safe. No system is secure that searches for ways to defeat its own security. If the AI would harm humanity in any context, you must be doing something wrong on a very deep level, laying your foundations awry. You are building a shotgun, pointing the shotgun at your foot, and pulling the trigger.

Eliezer S. Yudkowsky
Artificial Intelligence as a Positive and Negative Factor in Global Risk