It does not need a citation. There is no citation. What it needs, right now, is optimism. Optimism is not optional when it comes to doing new things in the world. The "needs citation" is reserved for people who do nothing and choose to be sceptics until things are super obvious.
Yes, we are clearly talking about things that are mostly still to come here. But if you assign a 0 until it's a 1, you are just opting out of advancing anything that's remotely interesting.
If you are able to see a path to 1 on AI at this point, then I don't know how you could justify not giving it your all. If you see a path, and in the end using all of human knowledge up to this point is what it takes to make AI work for us, then we must do that. What could possibly be more beneficial to us?
This is regardless of all the issues that will have to be solved and the enormous amount of societal responsibility this puts on AI makers — for which I, as a voter, will absolutely hold them accountable (even though I am actually fairly optimistic they all feel the responsibility and are somewhat spooked by it too).
But that does not mean I think it's responsible to try to stop them at this point — which is exactly what the copyright debate aims to do. It would simply shut down 95% of AI, tomorrow, without any other viable alternative around. I don't understand how that is a serious option for anyone who is rooting for us.
If you are going to make a bold assertive claim without evidence to back it up, and then change your argument to "my assertion requires optimism... trust me on this", then perhaps you should amend your original statement.
This is an astonishing amount of nonsensical waffle.
Firstly, *skeptics.
Secondly, being skeptical doesn't mean you have no optimism whatsoever; it's about hedging your optimism (or pessimism, for that matter) based on what is understood, even about a not-fully-understood thing, at the time you're being skeptical. You can be as optimistic as you want about getting data off of a hard drive that was melted in a fire; that doesn't mean you're going to do it. And a skeptic might rightfully point out that with the drive platters melted together, data recovery is pretty unlikely. Not impossible, but really unlikely.
Thirdly, it is highly optimistic to call OpenAI's efforts thus far a path to true AI. What are you basing that on? Because I have a passing, if not deep, understanding of the underlying technology of LLMs, and as such, I can assure you that I do not see any path from ChatGPT to Skynet. None whatsoever. Does that mean LLMs are useless or bad? Of course not, and I sleep better knowing that an LLM is not AI and is therefore not an existential threat to humanity, no matter what Sam Altman wants to blither on about.
And fourthly, "wanting" to stop them isn't the issue. If they broke the law, they should be stopped, simple as that. If you can't innovate without trampling the rights of others, then your innovation has to take a back seat to the functioning of our society, tough shit.
> If you are going to make a bold assertive claim without evidence to back it up, then change your argument to "my assertion requires optimism.. trust me on this", then perhaps you should amend your original statement.
Skeptics require proof before belief. That is not mutually exclusive from having hypotheses (AKA vision).
I think you raise some interesting concerns in your last paragraph.
> enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for
I'm unsure of what mechanism voters have to hold private companies accountable. For example, whenever YouTube uses my location without my ever consenting to it - where is the vote to hold them accountable? Or when Facebook facilitates micro-targeting of disinformation - where is the vote? Same for anything AI. I believe any legislative proposals (with input from large companies) are more likely to create a walled garden than to actually reduce harm.
I suppose there's no need to respond; my main point is that I don't think there is any accountability through the ballot when it comes to AI and most things high-tech.
People who have either no intention of holding someone/something to account, or no clue about what systems and processes are required to do so, always argue to elect/build first and figure out the negatives later.
The company spearheading AI is blatantly violating its non-profit charter in order to maximize profits. If the very stewards of AI are willing to be deceptive from the dawn of this new era, what hope can we possibly have that this world-changing technology will benefit humanity instead of funneling money and power to a select few oligarchs?