
Pelonalysis – Peloton Data with Python/Pandas/Matplotlib

By: Tom Breloff

Re-posted from: http://www.breloff.com/pelonalysis/

One year ago, Peloton seemed like a niche workout fad that wouldn’t last. And then the pandemic hit, and my wife easily convinced me that it would be worth the investment. I quickly became obsessed with pushing my limits and hitting new personal records.

A couple of months ago, I noticed a drop-off in my output. It came as a surprise, and I started to wonder if the bike had become miscalibrated over time. I discovered that you can download your workout history from the Peloton site, and decided it was time to dig into the numbers. Had my workout output, frequency, or content changed enough to explain a 10% drop in ability?

I downloaded my data and put together a Jupyter notebook to visualize my progress. After a bit of exploring, I made a bare-bones repo and pushed it up to GitHub: pelonalysis.
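The core of that kind of analysis is a short pandas pipeline: parse the workout timestamps, then aggregate output per calendar month. Here is a minimal sketch; the column names ("Workout Timestamp", "Total Output") and the inline sample rows are assumptions standing in for the real Peloton CSV export, not taken from the pelonalysis repo.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Stand-in for `pd.read_csv("workouts.csv")` -- column names and values
# are illustrative, not the actual Peloton export schema.
df = pd.DataFrame({
    "Workout Timestamp": ["2020-04-01", "2020-04-15", "2020-05-03", "2020-06-10"],
    "Total Output": [250, 280, 265, 240],  # kJ per ride
})
df["Workout Timestamp"] = pd.to_datetime(df["Workout Timestamp"])

# Group rides by calendar month: how often did I ride, and how hard?
monthly = (df.groupby(df["Workout Timestamp"].dt.to_period("M"))["Total Output"]
             .agg(["count", "sum"]))
print(monthly)

# A quick bar chart makes the workload trend obvious at a glance.
monthly["sum"].plot(kind="bar", title="Total monthly output (kJ)")
plt.tight_layout()
plt.savefig("monthly_output.png")
```

Comparing the `count` and `sum` columns across months is enough to separate "the bike got harder" from "I rode less."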

My discovery? I was working twice as much during my initial months!

Sadly I can no longer blame my bike. Time to get to work!

Trains, planes, and automobiles

By: Tom Breloff

Re-posted from: http://www.breloff.com/ai-isa-car/

The recent disagreements (for example the face-off between Gary Marcus and Yann LeCun, or Gary’s recent NY Times article and the ensuing Twitter conversations, or Judea Pearl’s discussion of his new book) about whether we’re on the “right track” in AI research seem a little strange to me.

As I’m a fan of metaphors, I’ll liken the goal of general AI to the goal of inventing a car.

Logic. All the cool kids are doing it.

Decades ago, researchers invested heavily in knowledge engineering and logical reasoning. The path toward human-level AI seemed obvious: encode enough world knowledge into a system, and give it the correct algorithms to draw on that knowledge.

In my metaphor, they were designing steering wheels, speedometers, and other devices that were intended for human understanding and communication while driving.

We made great progress, but that work was largely unusable for practical applications. I mean seriously… who wants a steering wheel without anything to drive?! Many critics concluded that the efforts were wasted, and that the approaches were fundamentally flawed.

Just kidding… Neural networks are the answer.

Also decades ago, we saw the invention of the core mathematics driving today's advances in neural networks. Backpropagation, feed-forward neural nets, convolutional layers, and Long Short-Term Memory (LSTM) are examples of things that existed a long time ago, but have only hit their stride in the last few years.

In my metaphor, we developed the wheels and chassis, along with the required physics understanding of torque and friction.

However, despite high expectations, this also disappointed initially, since what good is a cart without a horse to pull it? This disappointment led to an AI winter, until…

I need a new gaming PC… to do research (I promise!)

In the early part of this decade, a group of researchers discovered that with enough compute power, old techniques using neural networks could perform very well in sensory-motor domains such as image recognition and speech-to-text. They had found a horse to pull their dusty old cart!

Rather, they now had an engine! Combining the “engine” with the “wheels” allowed for numerous applications, with seemingly limitless possibilities. However, all that power is limited to very narrow domains: asking Echo to play a song, or getting calendar entries from Siri, or making a haircut appointment with Duplex.

Without the ability to interface seamlessly with humans, the technology must stay “on the rails”, like a train.

Now, no one can deny that railways were both useful and transformative. They paved the way for efficient trade and the globalization of economies. But while railways were important for industry, they had less impact on the day-to-day lives of people. Rail was, quite simply, not as transformative as the automobile.

Back on topic: “What is the right track for AI?”

We’ve separately invented the steering wheel and controls (world knowledge and logical reasoning), the tires and the chassis (statistical models aka Deep Learning), and a powerful engine (CPUs, GPUs, and TPUs… oh my!). But we don’t have a car (AI).

I think Gary, Judea, and others simply feel this: no one has invented the car, and we won’t get there by improving the tires. We shouldn’t choose between logic or statistics… we should build a system that uses both! It works for humans, after all.

How about Software 1.5 instead?

By: Tom Breloff

Re-posted from: http://www.breloff.com/software-one-point-five/

I recently read Andrej Karpathy’s blog post proclaiming that we are entering an era of “Software 2.0”, where traditional approaches to developing software (a team of human developers writing code in their programming language of choice… i.e. v1.0) will become less prevalent and important.

Instead, the world will be run by neural networks. Why not? They’re really great at recognizing objects in images, winning at board games, and even writing movie scripts. (Well maybe not movie scripts.)

I can’t decide if he’s being naive or if we should be scared (no… not from an army of infinitely intelligent super-robots).

Is he naive?

Neural networks are very powerful. There’s no question. But human software engineers do more than just pattern match inputs into outputs. In software development, it’s not enough to produce correct outputs 99% of the time (though even that is seemingly unachievable for most complex tasks). Imagine if your bank deposits only landed in the right account 99% of the time. Or if an air traffic control tower only assured your plane would land safely 99% of the time.

There are too many tasks that require near-certain guarantees on performance. And most importantly, many of those tasks require full human understanding of the processes and algorithms which determine the outcome. This is something we simply cannot expect from end-to-end neural (statistical) models.

I think he’s naive for claiming that statistical modeling can replace good ol’ fashioned software engineering.

Should we be scared?

Neural networks are fragile, complicated, opaque, compute-heavy, and easily tricked. They are simultaneously hard to understand and easy for bad actors to manipulate. But… they get some amazing results in certain domains (most notably sensorial tasks like vision, hearing, and speech).

Humans are gullible animals. We have implicit biases, and constantly change the facts to match our understanding of the world. In a world filled with Software 2.0, where the software programs are written by statistical models, the output of that software will start to look like magic. So much so that people will start to believe that it is magic.

Throughout history, people have been happy to worship and serve a power greater than themselves. What if people start to believe in computing magic, and trust important life decisions to a statistical model? Insurance companies might deny your coverage because a neural network told them a procedure wouldn’t help you. Employers will discriminate based on expected performance. Police will monitor and arrest people through statistical profiling, predicting crime that hasn’t yet happened. Courts will prosecute and sentence based on expectations of repeat offense.

You might be saying… “This is already happening!” I know. I think we should be scared of relying on statistical models without properly accounting for their biases and shortcomings.

It’s both.

Just like the spreading IoT time bomb, placing blind trust in Software 2.0 is a trojan horse. We let it into our lives without full understanding, and it puts us at risk in ways we may not realize.

The path forward is in developing human-led technology. Building machines that can help and advise, but do not assert full control. We shouldn’t worship a machine, and we shouldn’t put our blind trust in statistical methods. Humans are more than just pattern matchers. We can transfer our experience to new environments. We can plan and reason, without having to fail at a task millions of times first.

Instead of rushing to Software 2.0, let’s view neural networks in proper context: they are models, not magic.