It’s easy to point to a computer-created essay, song or illustration and find the defects or errors. Given hard work by 1,000 trained people, it’s likely that at least one human could make something more useful or inspired than a computer could.
But the real impact of AI isn’t going to be that it regularly and consistently does far better than the best human effort.
The impact will be that it is widespread, cheap and always there.
Search for anything and the Wikipedia page will ‘write itself’ just for you.
Brainstorm 12 variations of a solution to any problem you’re thinking about. Have a Rogerian therapist and idea coach on call at all times.
Press a button on your fridge and see a dozen recipes that use what’s in the produce drawer, and just that.
Everywhere, all the time.
Ubiquity is the quiet change we rarely see coming.
Many small businesses start with generosity and good intent at their core. But it’s a rough ride, and especially when outside funding is involved, it’s easy to get seduced by the bright lights of Milton Friedman and an obsession with short-term profits.
Over time, purpose starts to fade. The urgencies and demands of quarterly results, the opportunities for growth followed by more growth make it ever more difficult to stick with what we set out to do in the first place.
This post from Ari Weinzweig highlights a different way to stay on track, adding a level of structure to the good intent. It takes the sometimes mushy language of a B Corp and makes it legally and permanently part of the deal.
By codifying the structure from the start, we’re creating organizations that have boundaries. Boundaries aren’t necessarily a defect–they can be a feature. A boundary gives us something to lean against (leverage) and it also communicates to our constituents exactly what we’re here to do.
For a long time, we’ve been evolving in only one direction–companies that seek nothing but short-term investor returns, but do a lot of hand-waving along the way.
In a post-industrial business environment where people are more important than machines and where the consequences of our work are more vivid, it makes sense to bring intent to the forefront and keep it there.
The stable owner gets to pick which horse you get. Take it or leave it.
Some people prefer this. It means that we’re off the hook and not responsible. It relieves us of the emotional labor of choice. Let someone else worry about it…
And so we give up our agency and our freedom, simply to avoid responsibility.
The thing is, there is still a choice: The choice of whether or not to go into the stable in the first place.
Every time we choose a job, cast a ballot (or choose not to), or select a path, we’re making a choice. What happens after that is still our responsibility.
It’s a fascinating payment model. For digital goods and other transactions where the marginal cost of one more sale approaches zero, “pay what you want” exposes how complicated the story we tell about money can be. When we add in the charity component, it becomes even more layered.
The Best of Akimbo (volume 1) is now available as a pay-what-you-want download. 100% of what you pay is a donation that goes directly to charity: water. The details are all here.
There are more than five years worth of weekly episodes of my Akimbo podcast now available, and my producer Alex DiPalma and I have put together a five-episode best of. No ads, of course, no QA, just some culture-gazing you can dive into and even share. Other episodes are available wherever you get your podcasts.
To date, readers of this blog have helped 34,000 people get a reliable source of clean water, with more than a million dollars donated so far. It’s hard to imagine something more generous, more life-changing or more urgent than bringing water to someone who needs it. Thank you.
A business that says its mission is to "reinvent local commerce to better serve our customers and neighborhoods" can spend a lot of time doing not much of anything before realizing that it's not actually creating value.
A non-profit that seeks to create “fairness and equity” can also fall into a non-specific trap.
Far more useful to say, “we sell a good cup of coffee at a fair price,” and see if you can pull that off first.
Google claims they want to organize the world’s information. But they began by simply building a search engine that people would switch to.
We need a goal. But the more specific and measurable, the better.
I’ve been fascinated by the way we set type since I did my first packaging forty years ago. It’s a combination of tech, art, systems, culture and most of all, deciding to put in the effort to get it right.
[This is a long post, it would have been a podcast, but it doesn’t really lend itself to audio.]
When airplanes first started flying passengers, there was a need for labels. Labels for passengers and pilots. WEAR SEATBELT WHEN SEATED. Why is it in all caps? My guess is that at the dawn of aviation, the machine that made the little metal signs could only easily handle 26 letters, so they chose all caps. Certainly, over time the labeling tech got better, but we stuck with all caps because that's what airplane signs are supposed to be like, even though they're more difficult to read that way.
Typography is a signal, not just a way to put letters on a page.
Before mechanical type was set by pressmen in the basements of newspapers, type was handwritten by monks. As a result, we see the beautiful kerning of letters, nestling the ‘a’ under the ‘W’. That takes effort and as a result, it simply looks right. It’s not right because your brain demands kerning, it’s right because the signal is something we associate with confidence and care.
Once we see the magic of kerning, it becomes impossible to avoid how careless people who don’t use it appear to be…
There have been many golden ages of typography, but the 60s and 70s saw a combination of high-stakes mass production (in ads and media) combined with innovations in typesetting that meant that instead of using handmade metal type, marketers could simply spec whatever they imagined. It also meant that instead of one person working on a document, a committee would spend days or weeks agonizing over how an ad looked, or whether the new layout of Time magazine would send the right message to millions of people each week.
Pundits were sure that the launch of the Mac would destroy all of this progress. Now that anyone could set type, anyone would. So resumes ended up looking like ransom notes, Comic Sans became a joke that was taken seriously by some, and folks like David Carson set type on fire.
Instead, the Mac and the laser printer pushed the best examples of type quality forward. Once again, culture combined with tech to create a new cycle. Now, small teams of people working on small projects could also agonize about type. Now, as beautiful typefaces increased in availability and diversity, it was possible to set more type, more beautifully. If you worked in an industry or segment where the standard demanded careful expression through type, it was possible and it was expected.
More good type, a lot more lazy type.
And then smartphones arrived.
And the type culture changed in response. If you don’t have a mouse or a keyboard, if your screen is the size of a deck of playing cards, you’re probably not being very careful with typography. Whatever is built in is what you use. People create so much content that there’s no time for meetings, for care, for awareness. Speech to text, type with your thumbs, take a picture, hit send.
The culture shifts. Now, the appearance of authenticity matters more than ever. And one way to do that is to not put on airs with fonts that remind us of craft, or kerning that reminds us that you took the time to do something more than the automatic minimum.
And this won’t last, because the cycles continue.
They say you can tell a lot about someone from their handwriting. For my professional life, my handwriting has always involved a keyboard. I know that even if people don’t consciously know that they’re judging the way our words look or sound, they are.
For many, the goal is to be the deciding vote, the donation that gets a cause over the goal, the person who counts.
And often, we enjoy piling on. Once the cause or fashion or tech is clearly working, it’s easy and fun to say “me too.”
More rare, more vulnerable and more important is to decide to show up in the lonely zone. When it might not work. When the originator really needs your support. When speaking up, donating or simply showing up feels like a risk or a waste.
Human beings are often more effective when we’re a bit self-effacing. “I think,” “Perhaps,” or “I might be missing something, but…” are fine ways to give our assertions a chance to be considered.
The solar-powered LED calculator we used in school did no such thing. 6 x 7 is 42, no ifs, ands or buts.
Part of the magic of Google search was that it was not only cocky, it was often correct. The combination of its confidence and its utility made it feel like a miracle.
Of course, Google was never completely correct. It didn't find exactly the right page every time. That was left to us. But the aura of omnipotence persisted–in fact, when Google failed, we were supposed to blame evil black-hat SEO hackers, not an imperfect algorithm and a greedy monopolist.
And now, ChatGPT shows up with fully articulated assertions about anything we ask it.
I’m not surprised that one of the biggest criticisms we’re hearing, even from insightful pundits, is that it is too confident. That it announces without qualification that biryani is part of a traditional South Indian tiffin, when it’s not.
Would it make a difference if every single response began, “I’m just a beta of a program that doesn’t actually understand anything, but human brains jump to the conclusion that I do, so take this with a grain of salt…”?
In fact, that’s our job.
When a simple, convenient bit of data shows up on your computer screen, take it with a grain of salt.
Not all email is spam.
Not all offers are scams.
And not all GPT-3 responses are incorrect.
But it can’t hurt to insert your own preface before you accept it as true.
Overconfidence isn’t the AI’s problem. There are lots of cultural and economic shifts that it will cause. Our gullibility is one of the things we ought to keep in mind.