Perhaps so, but is that an AI issue or a billionaire tech bro issue? It feels more like the latter than the former - and I’d argue the two aren’t as easily separable as that distinction implies.
The people building this stuff largely are the problem, which makes it an AI issue by default.
My read of the poster above is that they’re pointing towards the knee-jerk reaction AI discussions cause.
Mention AI and you invariably spark off “online experts” who argue in bad faith - and that bad faith cuts both ways, dismissing legitimate concerns and overstating them in equal measure.
There’s a lot more nuance to this issue than commonly presented.
For anyone actually wanting to engage with the substance rather than the noise:
https://blog.andymasley.com/p/a-cheat-sheet-for-conversations-about
That link is worth your time before wading in.
Be honest. Did you have an LLM write that? Because boy howdy, does it read the way LLMs output text, right down to missing the point.
Regardless, in only one place did I mention climate change effects, and that was in passing as the last item in a list of issues with AI and LLMs in particular. That was on purpose.
You can throw out the environment as an argument entirely, and accept Andy Masley’s entire premise (I don’t), and AI still has plenty to answer for.
For someone who claims to be versed in logical fallacies, you do like to bandy about the old strawman. I didn’t dismiss the concerns you raised, I reframed them. There’s a difference. Pointing out that the harms you listed are primarily harms of concentrated corporate power isn’t missing your point, it’s pointing at the root cause.
If Palantir didn’t exist, the surveillance state wouldn’t disappear. If Altman vanished tomorrow, the RAM supply chain wouldn’t magically recover. The tool is downstream of the incentive structure.
As for the LLM accusation, no I wrote that myself. Though I’ll note the irony of deploying an ad hominem to dodge the substance, especially after opening with a lecture about fallacies. Cute. If my prose is too structured for your taste, that’s a you problem.
The Masley link stands. Engage with it or don’t, but the knee-jerk “that reads like AI” isn’t the ‘aha’ you think it is.