If AI is only a “parrot” as you say, then why should there be worries about extinction from AI?
You should look closer at who is making the claims that “AI” is an extinction threat to humanity. It isn’t the researchers who work on ethics and safety (not to be confused with “AI safety” as part of “Alignment”). It is the people building the models and the investors. Why would they build and invest in something they believe will kill us?
AI doomers try to 1. make “AI”/LLMs appear far more powerful than they actually are, and 2. distract from the actual threats and issues with LLMs/“AI”, because those are societal and ethical: copyright, and the fact that these systems are not trustworthy at all. Admitting to those would make for a really hard sell.
You’d think so. But Germany, at least, struggles massively with a shortage of personnel to staff trains (roles beyond just the driver). As far as I know, there is no automated solution on the horizon for any form or scale of train traffic. The only self-driving trains I have experienced require tight control of the rail environment (entirely underground or elevated above the surface) and special stations with airlock-style platform doors.
Maybe there is just more money in self-driving cars. But I’m pretty sure they will happen before widespread automated trains. Which sucks.