My experience with generative-AI has been that, at its very best, it is subtly wrong in ways that only an expert in the relevant subject would recognise.
-
@fozztexx @jonathanhogg there are quite a few; Lego even have a visual programming language for their smart bricks (I think Python is officially supported now, unlike the not-quite-C used for Mindstorms).
There's also a visual language for a smart RC/drone controller built by that one guy
iforce2d is the guy. But they're generally hard to do anything nontrivial with and very hard to debug. Like Excel/Calc... it's so easy to have subtle errors even in simple programs that it's considered inevitable.
-
@jonathanhogg No, it's still difficult to program something so that it's exactly how you want it to be. It's apparently been underestimated how often that doesn't matter ("mostly working app" where getting it to working is more effort than starting from scratch), but we will see how that develops in the long run. Maybe plausible deniability is really enough for many things.
Nobody is gatekeeping clear, testable requirements and communication without misunderstandings. People usually just can't do that.
@dasgrueneblatt @jonathanhogg I also build bicycles (and have worked in a bike shop), and it sometimes shocks me how closely it parallels software.
People who actually ride a bicycle more than once don't want "a bike, any bike", they want a bike that does X, Y, and Z. They often can't articulate that at the start, usually because they don't know it.
But once they ride for a bit they want a comfortable riding position, gears and brakes that work, mudguards, etc.
Just like software
-
My experience with generative-AI has been that, at its very best, it is subtly wrong in ways that only an expert in the relevant subject would recognise. So I don't worry about us creating super-intelligent AI, I worry about us allowing that expertise to atrophy through laziness and greed. I refuse to use LLMs not because I'm scared of how clever they are, but because I do not wish to become stupider.
"AI is a genius on every subject except that one thing you're extremely knowledgeable about."
-
@jonathanhogg HyperCard wasn't my first programming/development experience*, but it was the one that galvanized the core of a coder within my being.
I wish I had kept The HyperCard Bible that the library had conveniently forgotten I had on loan for years; heavily thumbed through and dog-eared.
Even now I'm pressured to use LLMs to code with. "It's like having a hundred interns' work ready in seconds"... That's not the flex you think it is, boss.
Your existing code is so tightly coupled and monolithic, with no separation of concerns, and a culture like a workaholic travelling rodeo show. An LLM looking at what you've got isn't going to produce Art-of-Code-level advice... You're going to get "it's 3am, it finally compiles, push to prod before it breaks" advice.
* it was the second
-
@jonathanhogg And, fun fact, the evidence we have so far is that dependence on LLMs does legit harm your cognitive abilities.
As someone who actually builds things, I have to understand how they work, not ask the magic box to build them for me and hope against hope they actually function.
-
@eschaton It looks cute, though it's curious to build such a faithful homage but ditch the most interesting thing about HyperCard – the HyperTalk language
@jonathanhogg I think the author would disagree that HyperTalk was the most interesting thing about HyperCard, especially since they put a lot of work into crafting a language they feel is comfortable for such a use. (At least they didn’t just use JS or Lua…)
-
You know what? HyperCard was a glorious moment in time that I dearly miss: an army of non-experts were bashing together and sharing weird and wonderful stacks that were part 'zine, part adventure game and part database. Instead of laughing at vibe-coders, maybe we should ask ourselves why the current state-of-the-art in beginner-friendly programming tools is a planet-boiling roulette wheel.
jonathanhogg@mastodon.social Wow, I will be borrowing that lovely phrase "planet-boiling roulette wheel". That sums it up so well!
-
@jonathanhogg Excellent thread.
-
I will say one thing for generative AI: since these tools function by remixing/translating existing information, the popularity of vibe programming demonstrates a colossal failure on the part of our industry in not making this stuff easier. If a giant ball of statistics can mostly knock up a working app in minutes, that shows not that gen-AI is insanely clever, but that most of the work in making an app has always been stupid. We have gatekept programming behind vast walls of nonsense.
@jonathanhogg apparently I vigorously agreed four days prior https://mastodon.social/@glyph/116072915065555041
-
@jonathanhogg The reason we don't have HyperCard and the reason ppl think slop coding is productive are one and the same: bad language & framework design imposing ridiculous amounts of boilerplate code on everything. This both makes it hard for beginners to do anything and makes it so there's a lot of "work" (busy work) that a sloppy pattern-copying machine can do decently well at.
-
@StaceyCornelius @jonathanhogg is a descendants/replacement extant now?
@Photo55 @StaceyCornelius @jonathanhogg Livecode is sort of descended from a Hypercard clone (https://livecode.com). And there are a number of runtime engines for old-school Hypercard decks (https://archive.org/details/hypercardstacks?&sort=-downloads). There’s also Decker, which is a spiritual inheritor (https://beyondloom.com/decker/).
Dang I miss Hypercard.
-
@jonathanhogg @michael @jarkman I once asked a very senior HPC developer at Red Hat what keeps him up at night and he said, paraphrasing and pulling from memory that's about 15 years old now, "we haven't created new computer science since the 1960s and I fear we'll exhaust what we know before we discover anything new," and I think about that a lot these days.
@thatsten
The 1960s were mostly math because most CS was done on blackboards (as one of my profs put it), because access to machines was very limited. Also, there was a "Cambrian explosion" of ideas in this new field, and after that, evolution slowed down.
-
@michael @jarkman @jonathanhogg (IMO) we can't have more DSLs because everything useful is now plumbed together from a series of heterogeneous parts, and we've somehow decided they can only interoperate at the (barbaric) C ABI level or the (absurdly inefficient) web level. So we rely on general-purpose languages using specialised libraries, instead of the other way around.
I think fixing this boundary/contract problem would fix a lot in s/w engineering.
@tobyjaffey
gRPC is pretty efficient, although Erlang is a better abstraction.
-
To me, all these people crowing about having written 10k lines of code in a day are idiots. If you need to write that much code in a day, you are manifestly working at the wrong level of abstraction to solve your problem.
A heap of 10K lines, and they probably have NO idea what is even going on in there.
-
@jonathanhogg yep. And if they're working on an operating system (or any related system software, or anything that needs to stay up and running), they're committing malpractice that's going to get a lot of people killed:
https://mastodon.social/@JamesWidman/116133223470110717
Skynet didn't destroy the world by getting too smart-- it actually just started glitching and chasing its own tail in gibbering circles and everything broke.
-
Flash Studio was like that too, even though it was a trap.
-
Skynet didn't destroy the world by getting too smart-- it actually just started glitching and chasing its own tail in gibbering circles and everything broke.
I mean, it invented time travel, so gibbering circles were pretty much inevitable. As I understood it, the question was not how to destroy the world or eliminate humanity, but how to do so in a way that fails due to time travel, but ends up with the next iteration of Skynet being just a little bit more effective. It was like... playing the villain to motivate the humans to improve it, as the only way to solve the problems it was presented with.
And I mean, if you destroy the world and kill (almost) all humans, then change history so it didn't happen, then it didn't happen! Right?
Like that one Wakfu villain, except it was actually working.
CC: @JamesWidman@mastodon.social @jonathanhogg@mastodon.social
-
@jonathanhogg that’s exactly why it’s performing so well in the corporate world
-
@jonathanhogg I use LLMs to verify they are still stupid as shit compared to me.
"Why don't you use ChatGPT like everyone else?"
"Because it generates 6 errors in 10 lines of code"
-
@bit101 hold on, I've got another post incoming on exactly this…

@jonathanhogg @bit101 Right. I asked an LLM to make me a URL shortener website.
I read through the code and saw "interesting" ways of doing SQL.
Me: "is this code secure?"
ChatGPT: "of course it is not secure"
No vibe coder ever asks that question of their bullshit generator.
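For anyone who hasn't seen this failure mode first-hand, the "interesting" SQL an LLM will happily emit is usually string interpolation. A hypothetical minimal sketch (Python's stdlib sqlite3, a made-up `urls` table standing in for the shortener) showing the injectable version next to the parameterised one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (slug TEXT, target TEXT)")
conn.execute("INSERT INTO urls VALUES ('abc', 'https://example.com')")

def lookup_unsafe(slug):
    # Injection-prone: user input is pasted straight into the SQL text.
    return conn.execute(
        f"SELECT target FROM urls WHERE slug = '{slug}'"
    ).fetchone()

def lookup_safe(slug):
    # Parameterised query: the driver keeps data out of the SQL text.
    return conn.execute(
        "SELECT target FROM urls WHERE slug = ?", (slug,)
    ).fetchone()

# A crafted slug makes the naive version match a row it never should,
# while the parameterised version correctly finds nothing.
evil = "' OR '1'='1"
print(lookup_unsafe(evil))  # returns a row
print(lookup_safe(evil))    # None
```

The two functions look nearly identical in a code review, which is exactly why a generated blob of SQL needs an expert reader, not just a "looks plausible" skim.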