

It would still require a revolution.
I would like to believe that we could have a gradual transition without the revolution being needed, but… present political developments make revolution seem more likely.
or just propped up with something like UBI.
That depends entirely on how much UBI is provided.
I envision a “simple” taxation system with UBI + flat tax. You adjust the flat tax high enough to get the government services you need (infrastructure like roads, education, police/military, and UBI), and you adjust the UBI up enough to keep the wealthy from running away with the show.
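The arithmetic of that flat tax + UBI combination is worth spelling out: even with a single flat rate, the UBI makes the *effective* rate progressive. A minimal sketch, with made-up numbers (the 40% rate and $18,000 UBI are illustrative assumptions, not a proposal):

```python
# Flat tax + UBI: net tax is negative (a payment to you) at low incomes,
# approaches the flat rate at high incomes.
FLAT_RATE = 0.40
UBI = 18_000  # per person, per year (illustrative)

def net_tax(income):
    """Tax owed minus UBI received; negative means a net transfer to the person."""
    return FLAT_RATE * income - UBI

def effective_rate(income):
    """Net tax as a fraction of income."""
    return net_tax(income) / income

for income in (20_000, 45_000, 100_000, 1_000_000):
    print(f"${income:>9,}: net {net_tax(income):>10,.0f}, effective {effective_rate(income):6.1%}")
```

The two knobs map directly to the text: raise FLAT_RATE to fund more government services, raise UBI to shift the break-even point (here $45,000) upward.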
Marshall Brain envisioned an “open source” based property system that’s not far off from UBI: https://marshallbrain.com/manna
Machine learning? It’s already had a huge effect, drug discovery alone is transformative.
Machine learning is just large-scale automated optimization, something that had been done for decades before; the hardware finally reached a point where the automated searches started out-performing more informed, selective searches.
The same way that AlphaZero got better at chess than Deep Blue - it just steam-rollered the problem with raw power.
The cotton gin has been used as an argument for why slavery finally became unacceptable. Until then society “needed” slaves to do the work, but with the cotton gin and other automations the costs of slavery started becoming higher than the value.
Al Gore’s family thought that the political tide was turning against it, so they gave up tobacco farming in the late 1980s - and focused on politics.
Shareholders only care about the value of their shares increasing. It’s a productive arrangement, up to a point, but we’ve gotten too good at ignoring and externalizing the human, environmental, and long term costs in pursuit of ever increasing shareholder value.
CEOs are the figureheads; they are virtually bound by law to act sociopathically - in the interests of their shareholders over everyone else. Carl Icahn also has an interesting take on a particularly upsetting emergent property of our system of CEO selection: https://dealbreaker.com/2007/10/icahn-explains-why-are-there-so-many-idiots-running-shit
No big employer… there are plenty of smaller companies open to doing whatever works.
Universal Basic Income - it’s either that or just kill all the unnecessary poor people.
we will be no closer to some kind of 8-hour workweek utopia.
If you haven’t read this, it’s short and worth the time. The short work week utopia is one of two possible outcomes imagined: https://marshallbrain.com/manna1
We shouldn’t be using it to replace artists, writers, musicians, teachers, programmers, and actors.
That’s an opinion - one I share in the vast majority of cases - but there’s a lot of art work that AI really can do “good enough” for the purpose, and in those cases we should be freeing up the human artists for more creative work. Same for writers: if AI is turning out acceptable copy (which in my experience is almost never, so far - but hypothetically, eventually), why use human writers to do that? And so on down the line.
The problem is that capitalism and greedy CEOs are hyping the technology as the next big thing, looking for a big boost in their share price this quarter, not being realistic about how long it’s really going to take to achieve the things they’re hyping.
“Artificial Intelligence” has been 5-10 years off for 40 years. We have seen more amazing progress in the past 5 years than in the previous 35, but it’s likely to be 35 more before half the things being touted as “here today” are actually working at a positive ROI. There will be more than a few examples like the “smart grocery store” where you just put things in your basket and walk out and get charged “appropriately” - supposedly based on AI surveillance, but really powered mostly by low-cost labor somewhere else on the planet.
I’m about 50/50 between helpful results and “nope, that’s not it, either” out of the various AI tools I have used.
I think it very much depends on what you’re trying to do with it. As a student, or fresh-grad employee in a typical field, it’s probably much more helpful because you are working well trod ground.
As a PhD or other leading-edge researcher, possibly in a field without a lot of publications, you’re screwed as far as the really inventive stuff goes. But there’s a useful counterpoint: if you’ve read “Surely You’re Joking, Mr. Feynman!”, there’s a bit where the Manhattan Project researchers (definitely breaking new ground at the time) needed basic parts, like gears, for what they were building. The gear catalogs of the day told them most of what they needed to know. Per the text: if you’re making something that needs gears, pick them from the catalog, but avoid the largest and smallest in each family/table - those exist because the next size up or down runs into some kind of engineering problem, so stay away from the edges and you’ll get much more reliable results. That’s an engineer’s shortcut for tapping thousands, maybe millions, of man-years of prior gear research, development, and engineering just by referencing a catalog.
I think a lot depends on where “on the curve” you are working, too. If you’re out past the bleeding edge doing new stuff, ChatGPT is (obviously) going to be pretty useless. But, if you just want a particular method or tool that has been done (and published) many times before, yeah, it can help you find that pretty quickly.
I remember doing my Master’s thesis in 1989: it took months of research, with journals delivered via inter-library loan, before I found mention of other projects doing essentially what I was doing. In today’s research landscape that multi-month delay should be compressed to a couple of hours, frequently less.
If you haven’t read Melancholy Elephants, it’s a great reference point for what we’re getting into with modern access to everything:
If you were too lazy to read three Google search results before, yes… AI is amazing in that it shows you something you ask for without making you dig as deep as you used to have to.
I rarely get a result from ChatGPT that I couldn’t have skimmed for myself, spending two to five times as long.
I frequently get results from ChatGPT that are just as useless as what I find reading through my first three Google results.
AI search is occasionally faster and easier than slogging through the source material that the AI was trained on. The source material for programming is pretty weak itself, so there’s an issue.
I think AI has a lot of untapped potential, but it’s going to be a VERY long time before people who can’t clearly articulate what they want will be able to communicate it to an AI.
A lot of programming today gets its value from programmers guessing (correctly) what their employers really want, while ignoring the asks that are impractical or counterproductive.
AI makes every aspect where it’s being used a lot more productive and easier.
AI makes every aspect where it’s being used well a lot more productive and easier.
AI used poorly makes it a lot easier to produce near worthless garbage, which effectively wastes the consumers’ time much more than any “productivity gained” on the producer side.
Machine stitching is objectively worse than hand stitching, but… it’s good enough and so much more efficient, so that’s how things are done now; it has become the norm.
being against progress for that reason is just stupid.
Under the current economic model, being against progress is just self-preservation.
Yes, we could all benefit from AI in some glorious future that doesn’t see the AI displaced workers turned into toys for the rich, or forgotten refuse in slums.
Maybe that’s because they’re using AI to replace people, and the AI does a worse job.
Meanwhile, the people are also out of work.
Lose - Lose.
Yes, and no: https://www.npr.org/2025/02/11/g-s1-47352/why-economists-got-free-trade-with-china-so-wrong