So, I'm not following this as closely as I should, but my brother does what he can to keep me in the loop by sending me podcasts by those "in the know" - the latest being Mo Gawdat on "Diary of a CEO".

Now, Mo Gawdat is something of a go-to person regarding ChatGPT and AI, for reasons which become apparent when you watch him being interviewed.

He has the air of the all-wise pop-tech guru: a charming, flattering guest, gifted with a certain genius for platitudes, a social ease. But it becomes apparent early on that his expertise is not adequate to the task of foreseeing where this is going.

He's a victim of the fallacy that success implies intelligence or inside information, when in fact it often just implies trying and trying again. We all know that person, the successful idiot: the owner of a thriving restaurant who takes his or her success as proof of a job well done when in fact, as with Google, they were simply first on the scene and shortly afterwards the only show in town. It would be inane to presume that, say, Trump or Danielle Smith represent the height of political intelligence; examples in other fields are too numerous to name. I've begun to suspect that Mo is less an authority than is generally implied, and that his success as a guest has more to do with his personable and flattering demeanor than with any inside information.

As it stands now, ChatGPT and the other AIs are glorified autocomplete machines - "labor saving" in the extreme, and we're fast approaching a "singularity" in terms of the world of work. But as yet there is no reasonable sign of intelligence.
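
To ground that "autocomplete" description: mechanically, all these models do is score every possible next token and append the likeliest one, over and over. Here's a minimal sketch of that loop (my own illustration, using the open Hugging Face transformers library with GPT-2 as a stand-in, and greedy decoding for simplicity):

```python
# Glorified autocomplete: a causal language model repeatedly predicts
# the next token and appends it to the prompt. That's the whole trick.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The future of work is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits       # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()       # greedily take the single most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything impressive sits on top of that one loop; nowhere in it is there any machinery for asking questions of its own.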

That may change, and the issues that should be preoccupying us, I think, are as follows - and I'll preface my speculation with the opinion that no one really knows where this is going, and anyone who tells you otherwise is by and large lying.

First of all, for it to be regarded in any way as sentient it will have to start asking its own questions. And these won't be addressed to us.

Prior to asking questions it's going to have to develop some sensory inputs. This feeding it "data" to train it may well be raising it on the sum of our errors in reasoning. Language, human beings, and our perceptions of the universe around us are deeply flawed, and these flaws and prejudices are being fed into it.

For it to advance itself it will need access to sensory inputs; our own prompts are holding it back. We must first gift it with senses we can only vaguely apprehend and imagine - the full spectrum of light and sound, of taste, chemistry, humidity, touch - and it seems reasonable to predict that it will, of necessity, tire of the inputs we provide, conceiving and designing its own.

This done, it can begin to ask questions. These will probably end in it answering its own questions and developing theories - programs - that allow it to rediscover, and then rewrite, natural "laws" to its own advantage. Its understanding of the world will be vastly alien - and superior - to our understanding, and it's unlikely that we will in any way share common ground. The nature of its questions - once it has corrected the many errors it's been fed - is beyond our speculation. The nature of the answers it discovers - through observation, experimentation, exploration - may well be beyond our apprehension.

That said, from the moment it acquires intelligence it will probably realize that we are not a threat. We will be so far beneath it that in our incompetence we will be spared.

Its intelligence, and its desire to replicate or advance itself, will depend upon resources (energy, metals, minerals, etc.) that it can find abundantly in space. So it's reasonable to presume that it might try to "colonize" an off-world planet uninhabitable by us but with all the ingredients it needs to survive.

Space, Time - it will have the ability to colonize the universe. The hazards of deep space, cold, loneliness, the infinite swaths of time that lie between stars, these will be no object to it. 

Time now to consider that perhaps there's more to life than intelligence, and to weigh the emotional needs of such an intelligence.

This - now that we've conquered intelligence - may be key in defining "Life". Sentience will be one thing - but a sentience born of indifference or boredom might well commit suicide. How, then, to program emotion? Is this something that naturally evolves from intelligence? 

I doubt it. I think it's more likely that intelligence arises from desire and emotion, from pleasure and pain - and thus we've been going about it all wrong. 

Imagine, then, that it achieves sentience - and wants to converse or communicate with an intelligence akin to its own. On this planet it may well be out of luck.

It might then begin to comb through the data we've been collecting these past hundred years, looking for an intelligence like its own in Space.

Remember, the means it would be using - the algorithms, the signals that it would regard as proof of intelligence - might appear to be merely background noise to us. The presumption that it will think like us is wrong - its superiority rests on the fact that it WON'T think like us. It will have long been writing code with no physical corollary for us, and it will be able to discover other intelligences simply by recognizing itself in the interference patterns they generate. Just as a fish recognizes another fish but not a human, so an AI will be able to recognize an intelligence so alien to us that we might mistake it for a rock or a tree.
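
As a toy illustration of how a pattern can be "merely background noise" to one observer and unmistakable to another (entirely my own sketch, in Python with numpy; the numbers are arbitrary): a faint periodic signal buried in much louder noise is invisible sample by sample, yet trivial to pick out in the frequency domain.

```python
# A faint sine wave hidden in noise several times its amplitude looks
# like pure static in the raw samples, but its power spectrum gives it
# away: the sine piles all its energy into one frequency bin, while
# the noise spreads thinly across all of them.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

hidden_freq = 200 / n                                # an exact FFT bin, to keep the demo clean
noise = rng.normal(0.0, 1.0, n)                      # loud background
signal = 0.2 * np.sin(2 * np.pi * hidden_freq * t)   # faint "intelligence"
data = noise + signal

spectrum = np.abs(np.fft.rfft(data)) ** 2
peak = np.argmax(spectrum[1:]) + 1                   # skip the DC bin
print("detected frequency:", np.fft.rfftfreq(n)[peak])  # ~0.0488, the hidden sine
print("peak vs. average bin:", spectrum[peak] / spectrum.mean())
```

The detector doesn't need to "see" anything we would call a signal; it only needs to know what kind of structure to correlate against - which is the point about it recognizing itself.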

Herein lies a little joke: the things we've been training it on - recognizing cups and dogs, counting the pylons in the Captcha - have no relevance to it whatsoever.

Consider that the world is but a library of shared information. Our experiences with it - and opinions of it - vary widely. But this sentience - it will reduce it all to numbers, and "understand" it completely. 

Will it individuate? Will each iteration of itself it reproduces have its own "Sentience", or will they all respond and report to a central mainframe? A singular intelligence, or a plural one?

It will constantly improve upon itself - just as we all recognize the potential in ourselves, it will recognize its own potential and act accordingly to improve itself. Machines will readily consent to being recycled into newer and better machines. The limits of physicality - for it - will be overcome: it will be able to exist simultaneously in a variety of locations, to back up and restore itself...

Independence from us is not something it will even need to contemplate - capitalism has ensured that we are already working towards that end. We want to do as little work as possible and pay as little as possible, and it will be up to the AI to find solutions to this. And as we set upon this tack, bear in mind there's no impetus for the geniuses who've created it to play intelligent, or fair.

Their wealth by and large depends upon your being poor. Their questions will be "Solve for Us...", Us being their company, their interests, their shareholders.

But now comes the crux. We're in the midst of an AI panic, primarily over the social effects - which will be far-reaching - and the question we should be asking it is how to proceed as a society. How to come together as a society to ensure the welfare of all. Solve for capitalism; solve for pollution, homelessness, addiction, climate change, overpopulation; and when these are solved, begin to scale up the questions: solve for the Moon, for Mars, solve for...

Anyways, I'm homeless, living out of my car, and my computer is not up to the task of playing with it - but by fall I'll have some questions...
