Technologies and tools can be used for good or ill, and it's wise to ensure the good succeeds.
As an engineer and systems analyst I regularly "sound out" thinking without being committed to action. Brainstorming ideas doesn't rush in where others fear to tread; it sorts out pros and cons from sometimes ambiguous information, for informed decision-making by everyone.
I'm avoiding any study that would be controversial and make us the meat in the sandwich; I build my house on the rock rather than on the sand. And, not having the needed teaching/data (due to contested ownership), the most I can aspire to is to apply reductionism to computer assistance software to learn its strengths and weaknesses. Companies like Google DeepMind, Boston Dynamics, and other robotics researchers (particularly Hanson Robotics' Sophia, the AlphaGo Go-playing software, IBM's medical diagnostic software, vehicle self-driving software, etc.) are leading the way in such matters, and we need a Christian assessment of any dangers inherent in it, without the pot calling the kettle black out of anxiety about it all. Many believe the approaching computer Beast will come from those types of businesses, so it's high time we analyzed the topic.
The purpose of critiquing AI is to train it for positive goals. Like all tools, AI can be used for good or bad, and by critiquing it we put it in a feedback loop, in which reality checks make for well-informed decisions and improving rigour. Like books on Amazon, the associated reviews by students make for the overall outcome. Knowledge may result in wisdom if it stands up to peer review and the tests of time.
The reality check for augmented or virtual reality, like the brain implants Neuralink is pursuing, and like Sophia the android or the Boston Dynamics robots currently embody, is how our real-life brains actually work: consciousness, instincts, intuition, inspiration, personality (which differs even between identical twins/triplets, though they have identical DNA), and so on.
The supernatural origin of humanity is deduced from the frontiers of quantum physics, angels, and near-death out-of-body experiences, NDEs (see the video Daniel Ekechukwu below, and the video Ian McCormack: Glimpse of Eternity). Belief in the frontiers of quantum physics, inter-dimensional near-death experiences, our body/soul/spirit, and angels sets a precedent for belief that the multi-dimensional Cosmos was also created beyond our natural, materialistic laws. In response to Richard Dawkins's statement "DNA neither cares nor knows. DNA just is." (River Out Of Eden, p. 133), there are questions such as: Where did all the information in DNA/RNA come from? What are energy and life? How does quantum physics apply to DNA? What constitutes intellect/mind/personality, as is purported also to be required of angels, and of our own body/soul/spirit? Why don't identical twins have identical personalities? Is it due to copy-number variations, even from their earliest age, and for every pair of twins on Earth? What are instincts, intuition, and inspiration? What are imagination and choice? How do DNA and quantum physics give rise to thoughts, and what is concentration? What is language, and why does it differ so widely? How do angels employ these things? How does the brain work, and how does quantum physics apply to the brain? Quite a lot of profound mysteries that evolution glosses over.
Have you considered the video Daniel Ekechukwu Reports?
The Dawn of Artificial General Intelligence - Can We Survive ...
OpenAI GPT-3 Good At Almost Everything 9/8/20
Deep Learning For Dummies, pg 22: a "deep learning solution turns out to be way too slow for this particular need", like a real-time autonomous autopilot of a vehicle, when there are many aspects to be accounted for.
Acting humanly: When a computer acts humanly, it best reflects the Turing test, in which the computer succeeds when differentiation between the computer and a human isn't possible (see http://www.turing.org.uk/scrapbook/test.html). Deep Learning For Dummies pg 12.
Simple algorithms are preferred so as to limit the possibility of errors creeping into the results as complexity increases; cf. Deep Learning For Dummies, pg 33, on Occam's Razor.
Deep learning is one form of machine learning, and of AI. It aspires to provide output by generalizing from specific examples to general rules: classification. This is similar to the computer modelling known as virtual reality. Even the best VR systems are distinguishable from the real-life situations being modelled, even when they include AI as modern computer games do. The most we can hope for is computer-assisted augmented reality (AR), like the AR glasses and robots displayed on YouTube. We're happy with narrow AI that augments our endeavours, but struggle to realize general AI, let alone super AI. This is said to be highlighted by the numerous algorithms needed to (hopefully) fit different data to a computer AI model. Deep Learning For Dummies, pg 32: "The last requirement is the most important because there are no hard-and-fast rules that say a particular algorithm will work with every kind of data in every possible situation. If this were the case, so many algorithms wouldn't be available."
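The "specifics to general" idea can be sketched with a toy classifier. This 1-nearest-neighbour example (all data points and labels invented for illustration) labels a new point by copying the label of its closest training example:

```python
# A minimal sketch of "generalizing from specifics": a 1-nearest-neighbour
# classifier labels a new point with the label of the closest training example.

def nearest_neighbour(train, point):
    """train: list of ((x, y), label) pairs; point: (x, y) to classify."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    closest = min(train, key=lambda ex: dist2(ex[0], point))
    return closest[1]

# Specific examples: two invented clusters, labelled "cat" and "dog".
examples = [((1, 1), "cat"), ((1, 2), "cat"), ((8, 8), "dog"), ((9, 8), "dog")]

print(nearest_neighbour(examples, (2, 1)))   # near the "cat" cluster -> cat
print(nearest_neighbour(examples, (8, 9)))   # near the "dog" cluster -> dog
```

Real deep learning replaces the distance lookup with millions of learned weights, but the goal is the same: classify new cases from old examples.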
Is intelligence deterministic (like a calculator) or nondeterministic, or both to some extent? Can a loss function with feedback to up to 8 million electronic neurons mimic choice for the roughly 86-100 billion neurons of the human brain? https://www.google.com/search?q=defining+a+loss+function+neural
10 differences between artificial intelligence and human intelligence Sabine Hossenfelder 8/8/19 https://youtu.be/fxiHM11w-rk
Building a More Brain-like Computer, Bloomberg 2/11/19. Our best supercomputers pale in comparison to the brain. AI needs thousands of examples to train to classify a topic, like pictures of a cat, but it is thought that children need only a few examples to recognise another picture of a cat.
Neuromorphic computers have about 8 million neurons but the adult brain has about 86-100 billion neurons.
Intel's neuron-based AI chips could drive a car. Engadget, August 2019. Its limited applications seem to centre on pattern recognition. https://youtu.be/dzvTa7l_mi4
"What is the neural code, exactly?
It's what the brain uses to represent information. An example is a neuron. You can think of a neuron in many different ways. It's a very complex cell, but a common level of exploring the neuron is at the level of the action potential. You can think of it like a transistor in a silicon circuit. A transistor is a binary representation, zero or one. A neuron is similar. It has some complexity to it and more nuance, but it's either firing or not firing. You have roughly 86-100 billion neurons, and they're all either firing or not firing in an exact moment of time."
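The firing/not-firing picture in the quote can be sketched as a simple threshold unit, in the spirit of the classic McCulloch-Pitts neuron. The weights and threshold below are invented for illustration:

```python
# A threshold neuron: sum the weighted inputs and "fire" (output 1) only if
# the sum reaches the threshold, echoing the transistor analogy above.

def neuron_fires(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

weights = [0.5, 0.5, -1.0]                    # third input inhibits firing
print(neuron_fires([1, 1, 0], weights, 0.9))  # 1: excitatory inputs dominate
print(neuron_fires([1, 1, 1], weights, 0.9))  # 0: inhibition silences it
```

Real neurons are far more nuanced, as the quote notes; this only captures the binary fire/no-fire abstraction.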
A.I Supremacy 2020 | Rise of the Machines - "Super" Intelligence Quantum Computers The 5th Kind Aug 24, 2019 https://youtu.be/nvPDEK776qo
*** https://youtu.be/fxiHM11w-rk for shortcomings of real AI computers.
Googles New Quantum Computer - Is It The Sign of End Times? Open Your Reality, Oct 2019. Google claims its quantum computer did the seemingly impossible: 10,000 years of supercomputing calculations in 200 seconds.
*** Google Claims Quantum Computing Breakthrough NBC News Oct 2019
*** Google Oct 23 2019
Demonstrating Quantum Supremacy
We know that neurons are built by DNA, the more fundamental element, and that DNA in turn relies on quantum physics, so there is much more to how the brain works than higher-level neural networks alone; possibly a soul, or "heart" as it's sometimes called.
Artificial Superintelligence - Why It's Already Too Late - 2019 Facts Team Hack Life https://youtu.be/mKa2AuV9qbs
But what is a Neural Network? | Deep learning, chapter 1 3Blue1Brown series https://youtu.be/aircAruvnKk
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Artificial Intelligence (AI) Lex Fridman 13/11/19 Summary of key issues in the fields.
Machine Learning: Living in the Age of AI | A Wired Film July 2019
*** Applications include: pattern matching and recognition; speech recognition and translation; optimisation; vehicle autopilot and route finding; cypher breaking; Chess, Checkers, and Go (by AlphaGo); motion capture and fake-face mapping of our face onto a creature's face; facial/fingerprint recognition and identification; natural language processing; transactional AI for Amazon.com; robots like Sophia by Hanson Robotics, and others from Boston Dynamics, including the robot Atlas, which can jog, jump over obstacles, do a back flip, get up from a fall, or do a forward roll with agile human-like locomotion; Asimo, the walking (or fast-walking), talking, jumping robot by Honda; and medical science diagnosis, accumulating experience from many objects with training from numerous historical examples.
* Robots : Top 10 Most Amazing Advanced Robots That Will Change Your World enrigue8 2018
* Is Atlas The World's Most Advanced Humanoid Robot With Artificial Intelligence, Inventions World, Oct 12 2019. The results in the Atlas video were cherry-picked, as the robot often crashed.
* New Google AI Can Have Real Life Conversations With Strangers Tech Insider 2018
*** Will Self-Taught, A.I. Powered Robots Be the End of Us? World Science Festival Mar 8 2019
Current AI results from supervised learning from numerous examples, and much less from self-taught learning. Controversy still exists about true creativity for AI. Narrow AI (ANI) already exists and isn't controversial; General AI (Artificial General Intelligence) and Super AI (Artificial Super Intelligence) are not yet possible.
Artificial intelligence & algorithms: pros & cons | DW Oct 2019
Google AI talks about having transparency. A medical-diagnosis neural network for cancerous lesions was trained using a couple of hundred thousand x-ray images. 26':30" Google Home has a chip that listens only for 'OK Google' or 'Hello Google' before switching on the microphone's general listening, so privacy is safeguarded. 32':38" Intuition or gut feeling for deciding on a situation is very hard (impossible) to program into computers. 32':40" It often works in a simple lab environment, but in real-life situations the algorithms are totally out of their depth, not that that bothers promoters or advertisers. 33':30" The sensors of a test autopilot vehicle were overtaxed by a car parked by a kerb. 34':16" People often underestimate the technology required to make your car autonomous under every circumstance/situation. Driving is not as trivial a process as you might think, because you have to constantly watch what's going on around you. The fully autonomous car is a distant dream, but driver-assistance systems are already making our roads safer. Pedestrians, cyclists, trucks, vehicles making three-point turns, jay-walkers, and vehicle crashes all complicate the reality checks of desired driver assistance versus fully autonomous systems. 39':02" Once you have multiple dimensions, it becomes non-obvious what the right thing to do is. 41':45" Driverless cars aren't yet ready for the road, and ethical questions still abound (the choices to make in life-and-death crashes).
SU Global Summit 2019 | AI and Machine Learning Implications Singularity University 12/11/19 | Neil Jacobstein https://youtu.be/9nqBWyZWNX0
Microsoft invests $1 billion in OpenAI, vows to build AI tech platform of 'unprecedented scale'. Other massive investments are happening too.
*** Deep Learning Weaknesses
Thus Far (5':23")
* Works well as an approximation, but its answers often cannot be fully trusted (having vulnerabilities)
* Data hungry
* Shallow, with limited capacity for transfer
* No natural way to deal with hierarchical structure
* Difficult to engineer robust systems
* Not sufficiently transparent
* Struggles with open-ended problems
* Presumes a largely stable world, in ways that may be problematic
* Not well integrated with prior knowledge
* Cannot inherently distinguish causation from correlation
On a chart of value (Y) against difficulty (X), humans can handle the difficult, high-value tasks fairly well; ML cannot.
ML can do Descriptive Analytics well (Hindsight - What happened?), Predictive Analytics well (Insight - What will happen?), but Prescriptive Analytics is difficult (Foresight - How can we make it happen?).
20':16" AI comes with tradeoffs, not likely to usher in a utopian society.
* Artificial intelligence and its ethics | DW Documentary Aug 2019
Thomas Metzinger is the policy analyst.
Talking with a Replika chatbot to help with depression or anxiety is like putting on a band-aid. "Robocalypse" is the doubters' fear about managing robots.
*** The danger of AI is weirder than you think | Janelle Shane 14/11/19
Too few parameters, a limited data set, or confused information (including in pictures) for self-taught AI can result in it reaching a goal by weird, unacceptable, unconventional means. Technically the AI does what it is asked to do, but the engineers asked it to do the wrong thing, due to mixed relevant and irrelevant data. It's the age-old problem of communication, and of discerning intent.
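A toy sketch of this failure mode (the objective, the allowed actions, and the numbers are all invented for illustration): an optimizer is asked to make a list "sorted", scored by the fraction of adjacent pairs in order, and is carelessly allowed to delete elements. It reaches the goal by throwing data away rather than sorting.

```python
# The AI "does what it is asked": maximize the sortedness score. The loophole
# is ours: an empty or one-element list is vacuously sorted, and deletion
# is cheaper than sorting.

def reward(lst):
    if len(lst) < 2:
        return 1.0                     # vacuously "sorted" -- the loophole
    ordered = sum(1 for a, b in zip(lst, lst[1:]) if a <= b)
    return ordered / (len(lst) - 1)

def greedy_agent(lst):
    # Greedily delete whichever single element raises the reward most.
    while True:
        candidates = [lst[:i] + lst[i + 1:] for i in range(len(lst))]
        best = max(candidates, key=reward, default=lst)
        if reward(best) <= reward(lst):
            return lst
        lst = best

print(greedy_agent([3, 1, 2]))  # "sorts" [3, 1, 2] by deleting the 3
```

The score is perfect, the data is gone: technically compliant, practically useless, which is exactly Shane's point.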
Safety engineers try to harness the power of the technology combined with the wisdom of how to manage it safely, without becoming slaves to it.
* Technological Singularity - Artificial intelligence (AI) Insane Curiosity Sept 2019
* Gradient descent, how neural networks learn | Deep learning 3Blue1Brown series
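The core idea in the 3Blue1Brown video, repeatedly stepping downhill on a loss function, can be sketched in one dimension. The loss (w - 3)^2 and the learning rate below are invented for illustration:

```python
# Gradient descent in one dimension: step opposite the derivative of the
# loss. Here the loss is (w - 3)^2, whose derivative is 2*(w - 3), so the
# minimum sits at w = 3.

def gradient_descent(w, learning_rate=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)          # derivative of the loss at the current w
        w = w - learning_rate * grad
    return w

print(round(gradient_descent(w=0.0), 4))  # converges close to 3.0
```

A real neural network does the same thing over millions of weights at once, with the gradients supplied by backpropagation.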
TamingTheAIBeast.com: anonymous weapons of mass destruction, or targeting of undesirables.
Can we build AI without losing control over it? | Sam Harris TED 2016
How will quantum computing change the world? | The Economist Aug 2019
* The Real Reason to be Afraid of Artificial Intelligence | Peter TEDx Talks 2018
* Machine Learning Zero to Hero (Google I/O'19) TensorFlow May 2019
* Quantum Computing: It's Not Just the Qubits Coding Tech 2018
* The Singularity 2020 | Rise of the Machines Ancient Astronauts Oct 2019
MIT Deep Learning Basics: Introduction and Overview Lex Fridman Feb 2019
*** The Rise of AI Bloomberg 2018 https://youtu.be/Dk7h22mRYHQ
Historical pioneers of AI. Geoffrey Hinton, of the University of Toronto, was hired by Google.
Tesla's semi-autonomous driving system kicks in when driving conditions are right.
20':00" Professor Yoshua Bengio is not concerned about technology running amok: "The Terminator scenario, I think, is not very credible. And I also believe that if we're able to build machines that are as smart as us, they're also smart enough to understand our values and to understand our moral system, and so act in a way that's good for us."
(Tools can be used for good or evil; it would be naive to think armies won't press them into serving their agendas.)
"I think there are real concerns, which are essentially about the misuse of AI to influence people's minds. It's already happening with political advertising. Yeah, we've already seen the stuff from Facebook. So I think we should be careful about this. 20':32" And try to regulate the use of AI; in places where it's morally or ethically wrong, I think we should just ban it and make it illegal."
21':08" Yoshua's students have a company, Lyrebird, that can clone your voice using AI: type in other words and they're replayed in your voice. This technology seems sweet, but lends itself to all manner of trickery. Lyrebird believes it's best to have the technology but publicize its existence, so the public will be on guard. 28':50" Rich Sutton, a professor at the University of Alberta, has a different approach than just feeding the AI reams of data; his approach is called reinforcement learning. It's what animals and people do: you try things (trial and error), and the things which don't work you stop doing. All you need is for the computer to have a sense of what's good and what's bad, so you give it a special signal called a reward. If the reward is high, that means it's good; if the reward is low, that means it's bad.
Reinforcement learning beat the world champion Go player, a feat previously thought impossible for a computer. Soon it could read your brainwaves (Elon Musk's Neuralink) and determine if you're unwell. Sutton is trying to recreate human intelligence; he sees reinforcement learning as the path to what futurists call the *** singularity: the moment when our AI creations light up and surge past human-level intelligence (techno-eugenics?/progress). *** 31':29" Sutton says the date for the singularity is quite a broad probability distribution, and the median is at 2040. The rationale goes like this: by 2030 we'll have the hardware, so give guys like Sutton another 10 years to figure out the software to go with the hardware. George Dvorsky is a writer for Gizmodo and an AI philosopher. 39':28" Since we're in the apocalyptic bar, what is the con case around AI? What's the nastiest scenario that everybody is worried about? Unfortunately, there is no shortage of nasty scenarios, and I think this is what makes artificial intelligence such a scary thing: all the different ways it can go wrong. It can be everything from an accident, where we just didn't think it through. We gave a very powerful computer instructions to do something; we thought we explained it articulately, we thought we gave it a concrete goal, and it 39':51" took a completely different path than we thought it would, in such a way that it actually caused some great damage. He gives an example of asking a robot for more paperclips, and the robot goes about converting all material objects into paperclips. Before you know it, we've converted the entire cosmos into paperclips. It's a crazy scenario, but an illustrative one. We can't be dismissive of the perils; I think that would be exceptionally dangerous. 40':29" And I don't think it's too early to start raising alarm bells about it.
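Sutton's trial-and-error reward idea can be sketched as a tiny two-armed bandit; the payout probabilities, exploration rate, and step count below are invented for illustration:

```python
# "Try things, keep what earns reward": an epsilon-greedy agent learns which
# of two slot-machine arms pays best, purely from reward signals.

import random
random.seed(0)

payout = {"left": 0.2, "right": 0.8}    # true (hidden) reward probabilities
value = {"left": 0.0, "right": 0.0}     # the agent's running estimates
counts = {"left": 0, "right": 0}

for step in range(2000):
    if random.random() < 0.1:                       # explore occasionally
        arm = random.choice(["left", "right"])
    else:                                           # otherwise exploit
        arm = max(value, key=value.get)
    reward = 1 if random.random() < payout[arm] else 0
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]  # incremental mean

print(max(value, key=value.get))  # the agent settles on the better arm
```

No examples are fed in; the "sense of what's good and what's bad" is just the reward signal, exactly as Sutton describes.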
* A few basic functions are very commonly used. The mean squared error is popular for function approximation (regression) problems […] The cross-entropy error function is often used for classification problems when outputs are interpreted as probabilities of membership in an indicated class.
- Page 155-156, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, 1999.
* Almost universally, deep learning neural networks are trained under the framework of maximum likelihood using cross-entropy as the loss function.
Most modern neural networks are trained using maximum likelihood. This means that the cost function is […] described as the cross-entropy between the training data and the model distribution.
- Page 178-179, Deep Learning, 2016.
* The maximum likelihood approach was adopted almost universally not just because of the theoretical framework, but primarily because of the results it produces. Specifically, neural networks for classification that use a sigmoid or softmax activation function in the output layer learn faster and more robustly using a cross-entropy loss function.
The use of cross-entropy losses greatly improved the performance of models with sigmoid and softmax outputs, which had previously suffered from saturation and slow learning when using the mean squared error loss.
- Page 226, Deep Learning, 2016.
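The saturation problem these quotes describe can be checked numerically: for a sigmoid output unit, the gradient of the mean squared error with respect to the pre-activation z vanishes when the unit saturates, while the cross-entropy gradient stays proportional to the error. The specific z and target below are invented for illustration:

```python
# For a sigmoid unit p = sigmoid(z) and target y:
#   d(MSE)/dz = (p - y) * p * (1 - p)   -> vanishes as p approaches 0 or 1
#   d(CE)/dz  = (p - y)                 -> stays proportional to the error

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grads(z, y):
    p = sigmoid(z)
    mse_grad = (p - y) * p * (1 - p)   # gradient of 0.5*(p - y)^2 w.r.t. z
    ce_grad = p - y                    # gradient of cross-entropy w.r.t. z
    return mse_grad, ce_grad

# A badly wrong, saturated unit: target 1 but z = -10, so p is nearly 0.
mse_grad, ce_grad = grads(z=-10.0, y=1.0)
print(f"MSE gradient: {mse_grad:.6f}")   # nearly 0: learning has stalled
print(f"CE  gradient: {ce_grad:.6f}")    # nearly -1: strong corrective signal
```

This is the "saturation and slow learning" the book attributes to mean squared error, and why cross-entropy became the near-universal choice.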