
Prophecies Happening NOW

Knowledge shall increase

Knowledge shall increase – literally, to multiply on an exponential scale. And that is exactly what has happened with knowledge: it is growing exponentially, out of control. We are so immersed in information today that we take it for granted; we have more knowledge than we know what to do with. It is happening exactly as Daniel said – multiplying along an exponential curve.


  • A single newspaper today contains more information than a person in the 17th century would come across in an entire lifetime – and we don't have just one newspaper, right? We have thousands of them at our fingertips.

  • An ordinary watch today has more computing power than the original lunar lander that landed on the moon.


Technology's exponential growth

  • It used to take one full century – 100 years – for man's knowledge base to double

  • We are now at a point where knowledge is doubling every 11 to 12 hours.
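A rough sketch of the arithmetic behind those doubling figures can make the contrast concrete. The 100-year and 11-to-12-hour doubling periods are the figures quoted above, not independent data:

```python
# Doubling-time arithmetic for the figures quoted above.
# The 100-year and 12-hour doubling periods come from the article itself.

HOURS_PER_YEAR = 24 * 365

def doublings(period_hours: float, span_hours: float) -> float:
    """Number of times a quantity doubles over the given time span."""
    return span_hours / period_hours

# Doublings per year under each regime:
old_pace = doublings(100 * HOURS_PER_YEAR, HOURS_PER_YEAR)  # 0.01
new_pace = doublings(12, HOURS_PER_YEAR)                    # 730.0

print(f"Once-a-century doubling: x{2 ** old_pace:.4f} growth per year")
print(f"Every-12-hours doubling: x2^{new_pace:.0f} growth per year")
```

Whatever the exact figures, the shape of the claim holds: moving the doubling period from a century to half a day turns negligible annual growth into an astronomically large factor.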


It's spiraling out of control! God said it would, in the book of Daniel, 2,600 years ago. This is a sign that you're living in the end times – and it's not a good sign. It's called the end for a reason: not the beginning of a great time, but the end of time.

Revelation 11 says that the whole world will watch the two witnesses in Jerusalem. Revelation 13 requires the Antichrist to have access to technology that allows him to appear omniscient, omnipotent, and omnipresent. The technology to do all of that is here now.

Billy Crone is far and away the world's leading teacher on technology in prophecy, and all of his studies are available for free on his app. I cannot recommend them highly enough:

The ultimate sign that you're headed for the worst day of your life would have to be this... You wake up one morning only to realize that your family has suddenly disappeared. So you run to turn on your TV to see what's happening, and there you watch a special worldwide news report declaring that millions of people all over the planet have simply vanished. As you spy the Bible on the coffee table, it suddenly dawns on you that your family was right after all when they kept telling you about the rapture of the Church. Then, to your horror, you realize that you've been left behind and have been catapulted into mankind's darkest hour, the 7-year Tribulation that really is coming upon the whole world. But thankfully, God is not only a God of wrath; He's a God of love as well. And because He loves you and me, He has given us many warning signs to show us that the Tribulation is near and that His 2nd Coming is rapidly approaching. Therefore, The Final Countdown takes a look at 10 signs given by God to lovingly wake us up so we'd give our lives to Him before it's too late. These signs are the Jewish People, Modern Technology, Worldwide Upheaval, The Rise of Falsehood, The Rise of Wickedness, The Rise of Apostasy, One World Religion, One World Government, One World Economy, and The Mark of the Beast. Like it or not, folks, we are headed for The Final Countdown. Please, if you haven't already done so, give your life to Jesus today, because tomorrow may be too late!

Your cell phone is listening to you, monitoring your every conversation. You're being photographed everywhere you go by thousands of surveillance cameras. Facebook and social media know who you are, what you look like, what you buy, and where you travel. Microchipping parties are now the rage as people discover how easy it is to go cashless and just swipe their hand across a scanner. Want a hologram girlfriend who calls you at work and sends you texts throughout the day, just like a "real" relationship? Sweden has gone almost fully cash-free. Even third-world countries like Vietnam, Nigeria, and Tanzania are going cashless! Are you really driving your own car? Or can today's cars be hijacked by hackers while you're driving down the road? Smart cards, smart watches, smart phones, smart cars... lots of really smart people have bought into the Mark of the Beast technology, coming soon to a "chipping party" near you! Gary Stearman and Billy Crone discuss the Age of Technology and how this will one day lead to global enslavement of the world's population. A scary journey into the future awaits, prophesied 2,000 years ago in the Holy Bible! How did the Apostle John know what was coming when he was isolated on the Island of Patmos? You know the answer!

Pastor Billy Crone reveals how modern technology is not the friend you think it is. It is invasive and is preparing the way for the Antichrist’s kingdom. We have no place to hide.

Technology & Culture Reset (Christ In Prophecy)

How will the Great Reset be achieved technologically and how will it affect culture worldwide?

Fearfully and Wonderfully Made :: By Jennifer Laufenberg

When Christians Partake in the Sins of Science and Technology

“Beloved, believe not every spirit, but try the spirits whether they are of God: because many false prophets are gone out into the world” (1 John 4:1).

“For do I now persuade men, or God? Or do I seek to please men? for if I yet pleased men, I should not be the servant of Christ” (Galatians 1:10).

“O foolish Galatians, who hath bewitched you…?” (Galatians 3:1).

In a podcast on the Solari Report, Harry Blazer remarked that technologies such as artificial intelligence and brain mapping are "not benign." As such, we should ask ourselves who is behind them. Who, in fact, is in control of these tools that will shape us? What emotional and spiritual intelligence, Blazer asked, do the people who desire centralized power hold? They have a plan for what they want people to be, he said.

That podcast was from 2018, one year after The Wall Street Journal (WSJ) ran a short article on Facebook's (now Meta Platforms, Inc.) Building 8. At the time of the article, this "building" was rather new, but more importantly, this obscure division was working hard on developing brain-computer interface technologies. In short, it was exploring the possibilities of connecting our magnificent God-created brain to machines and studying the outcomes, in what the head of the division at the time, Regina Dugan, called "audacious science."

Now, in 2021, these so-called advancements in the field of neurotechnology have not faded into another galaxy. More are making an appearance, quietly shrouded under the guises of education, business, and healthcare. When complications like ethics arise, the concerns revolve around a worldly trinity: access, equity, and privacy. In the WSJ article, “How to Get Smarter: Start with the Brain Itself,” [1] Robert Hampson, a neuroscientist with Wake Forest Baptist Medical Center, said, “If there were a prosthetic that improved a person’s memory, is that an unfair advantage?”

In the age of the Internet of Things (IoT) – the collection of all devices (think the ubiquitous cell phone, the Apple Watch, or, who would have guessed, the smart garage door opener) that connect to the internet and form a mass collection and distribution of our data – Mr. Hampson's question of fairness bypasses the most foundational level of our purpose. This technology, which promises to "alter how the brain functions, and ultimately our sense of self," should be questioned, then, on the most basic level of Genesis 1: "In the beginning, God created…."

The Internet of Things has driven the increasing digitalization of our world, thus providing life to a more artificially constructed world through 5G, augmented reality, and virtual reality. The problem, then, begins not with the devices but with the concept of connecting the minutest aspects of our life through tiny sensors that transfer our data (our actions, thoughts, and emotions) throughout a space whose owner we know not. Marketed and sold to us as essential for planning, health, learning, and security, we simply have and continue to buy in. At what price?

For Christians, the desires of science and technology to alter God’s most precious creation—human design—should be alarming. Moreover, when the technologies are largely funded by Pentagon’s DARPA (the Defense Advanced Research Projects Agency), we should indeed be asking the kinds of prescient questions that Harry Blazer suggested back in 2018, back before our Covid world and the promise of the metaverse. Yet, I believe that, for too long, we born-again Christians have allowed Jesus’s admonition that we are to be as wise as serpents and as harmless as doves (Matthew 10:16) to lull us into a naive tranquility whereby we pay lip service to the work of wolves, or put such strict limitations on the breadth of it, that it can only apply to a handful of vices regularly preached against.

The parable of the mustard seed in the Gospel of Matthew 13:31-32 is applicable to the ever-expanding Internet of Things (IoT). What many don't understand about the parable is that the mustard seed's explosive growth is undesirable, leading it to become a habitat for sin, as represented by the birds and evidenced by leaven in the following parable. In regard to the vast Internet of Things, we might wonder if all that's waiting to be connected is, well, you and me. With artificial intelligence and the metaverse, the possibility is here. It's being marketed as something to be envied and something to be had for many. Let's look at these technologies that are and will be subtly sold to His creation who bear His image, promising to propel us into an altered and bettered state.

Artificial Intelligence

Dr. Hassan Tetteh is the Warfighter Health Mission Chief for the Department of Defense Joint Artificial Intelligence Center. In the WSJ article, “How AI Will Make Your Doctor Smarter,” [2] he explained, “What the X-ray or anesthesia were to medicine generations ago, AI is today in its essentiality. You can call me an evangelist, a proselytizer, or a broken record—it’s all fine. Vladimir Putin had it right when he predicted whichever nation leads in artificial intelligence will dominate. In the arms race for healthcare AI, I want to make sure the United States doesn’t come in second.”

At its simplest level, artificial intelligence extracts human intelligence and inputs it into a man-made creation. Our knowledge, our activity, our data is the foundation upon which artificial intelligence is built. Then, it operates as the replacement for human intelligence. Keep the spiritual in mind: Only God can create from nothing. Satan can only mimic God, as he lacks the ability to create from nothing. Furthermore, Jesus told us how Satan operates: stealing, killing, and destroying.

This “arms race for healthcare AI” is not limited to the military. It extends into the civilian healthcare system and the education system as well. In this Covid world, AI could be deployed to detect and diagnose your cough—over the phone via a smartphone app. Though AI is still undergoing schooling to detect and diagnose a cough, once it is sufficiently learned, your smartphone will be another listening ear in the doctor’s office, hospital, or nursing home to indicate an increase in illnesses—or, even in your home, reporting back to your doctor whether your illness is improving, worsening, or in need of further evaluation.

From a listening device to a linking device, the Defense Department has invested $18 million to study wireless brain connections that will enable the transfer of one person's words or images to another person's brain. This research, led by Dr. Jacob Robinson, a neuro-engineer at Rice University, relies on a combination of technologies, including nanotechnology, magnetic stimulation, and genetic engineering, the latter of which "alter[s] neurons." Once again, the social contribution of linked brains falls under the parameters of health, as in helping Alzheimer's patients, but the potential scope is wider, encompassing the classroom, workplace, and society at large:


“I see a day,” said Dr. Robinson, “when people will be connecting to their brains in the workplace and in the classroom even in social situations.” [1]

Though these neural links are described as "noninvasive" (currently a wireless headset is used), it is possible that brain connections could evolve into what journalist Robert Lee Hotz called an "internet of thoughts." Similar to the IoT, neurons would be connected to a cloud, "providing access to supercomputing storage and processing capabilities and artificial intelligence systems." [1]

From a wireless headset to an implant, DARPA's Biological Technologies Office has been underwriting research for memory implants, with the end goal a "wireless memory prosthesis" available for anyone learning a new task, from the learning impaired to the ordinary procrastinator. While the technology may be a boon to college students during finals, the science behind it and the future of such science is deeply disturbing. The formation of a memory implant has come from scientists' ability to understand our own ability to remember. From there, according to DARPA, scientists were able to "'write in' that code to make existing memory work better." [1]

Elon Musk’s Neuralink is researching a brain implant that allows a person to control a computer or cell phone through an implant, or what they call the Link. Musk’s research relies on the same foundation that Facebook’s Building 8 was doing in 2018, brain-machine interface systems (BMIs) or brain-computer interfaces (BCIs). Again, the basic premise is to enable communication between the brain and an electronic device (a cell phone or computer), which will allow the device to receive information from the brain as well as input information into the human brain itself. Neuralink’s Link, however, is designed to be “fully wireless communication through the skin.”


Though Neuralink is marketing this technology for individuals with spinal cord injuries, one must question whether or not this is ultimately an exploitation of the final frontier: the mind. If all of this seems conspiratorial, think again. In 2015, citizens in Europe, first in Sweden and then Germany, began willingly getting microchipped as a matter of convenience. Why carry keys and a wallet when you can just swipe a hand? So, what’s changed in six years? In 2021, is Neuralink’s vision outrageous or right on the money “to create BMIs that are sufficiently safe and powerful that healthy individuals would want to have them?”

Because AI is heralded as vital to the future of business, healthcare, and education sectors, there’s little restraint for its advance. Furthermore, with its ability to mimic human voices, needs, and desires, AI will have a central role in the internet’s newest successor: metaverse.


Predicted to be a trillion-dollar enterprise (or rather, a new universe), the metaverse will combine augmented reality, virtual reality, and the internet to create what Mark Zuckerberg calls an "embodied internet" (quoted in "Mark in the Metaverse," The Verge). [3] Where today we browse the internet, in the metaverse we will be inside this new world – talking, shopping, dining, working out, getting together with friends, going to work. In short, it sounds like much of what we already do, except via our own avatar. And just like the life we have now, we will have purchasing power. Speculated to be the medium of exchange, cryptocurrency will enable us to outfit our avatar, purchase homes, decorate them, and continue to spend in all sorts of ways that one can only predict will be utterly worthless.

Yet, the metaverse will be more than an alternate reality we zoom in and out of; in fact, we could spend most of our day residing in this artificial world. One of the appeals, Zuckerberg contends, is that the metaverse will provide a more natural experience than our current internet: “I don’t think this is primarily about being engaged with the internet more. I think it’s about being engaged more naturally.” [3]


Within this more natural place, there will be no need for real transportation, as we will be able to teleport from one place to another. Think big because the metaverse is expansive. Though we may “own” a few things with our virtual money, that will be after these brave new worlds are formed and purchased. Sound confusing? Unfair? As Konrad Putzier reports in “Metaverse Real Estate Piles Up Record Sales in Sandbox and Other Virtual Realms,” [4] investors and companies are already purchasing land in the metaverse worlds. (Collectively, these different worlds are known as the metaverse.) One company that invests in digital realty, Republic Realm, recently paid $4.3 million for virtual land in the metaverse world Sandbox.

Before the metaverse becomes our mainstay, however, Zuckerberg admits that there will likely be at least one technological evolution beyond the Oculus headset. [3] Several companies are working on ways to provide the immersive experience of the virtual reality headset but in a manner more akin to the smart watches or smart glasses. The latter, which Christopher Mims describes in his article “A View of Apple’s Future,” [5] will give a person an “overlay of [their] world with a heads-up-display that puts driving directions, messages, video chats and everything else we do on our phones directly into our field of view.” One thing seems more certain about the advances; at minimum, access to the metaverse will be through a “face-based computer.”

Though Qualcomm's vice president of XR and the metaverse, Hugo Swart, predicts that we are ten years from the "holy grail" of augmented reality glasses, I believe access will be far more advanced. The key is in one word: embody.

Today’s internet is addictive, but it does not embody us. A look at the meaning of embody gives context to who we may be or what we may lack in this coming metaverse. The following definitions come from Merriam-Webster:

  1. “to give a body to (a spirit)”
  2. “to deprive of spirituality”; “to make concrete and perceptible”
  3. “to cause to become a body or part of a body”
  4. “to represent in human or animal form”


Through process of elimination, two of these definitions can be discarded as pertaining to this new world: "to make concrete and perceptible," since this is already done through virtual reality, and "to represent in human or animal form." (Joanna Stern pointed out in her article "Stuck in the Metaverse for 24 Hours" that her avatar floated around legless with other legless avatars.)

The other meanings, "to give a body to (a spirit)" and "to deprive of spirituality," if true in the context of the metaverse, clearly delineate that this is no place for a born-again Christian. As born-again believers, we are already part of the body of Christ. Our body is not our own but has been purchased with Christ's precious blood, and we are indwelt with the Holy Spirit.

While the metaverse is most often associated with Meta or Meta Platforms, Inc. (formerly Facebook), there is no one company behind it. Currently, Google, Microsoft, Samsung, and Sony, among others, are also contributing to this alternate world that venture capitalist Matthew Ball estimates will eventually "represent 10% to 20% of the world economy" (quoted in "Metaverse Emerges as Promising Yet Uncertain New World for Investors"). [6] This lucrative man-made universe will, according to Zuckerberg, be "operated by many different players in a decentralized way." [3] More insidious, still, is Zuckerberg's comment about a "creator economy," in which there will be new work for everyone, courtesy of "individual creators designing experiences and places."

How we will all become embodied and take part in new roles designed for us by these creators is unknown. More pressing—how will our self-will, our right to be sovereign over ourselves (an aspect of human dignity that God has given us) be taken over?

It is here that I would encourage you to think hard about the Covid injection and booster shots. Seriously research them outside of mainstream media sources. The very definition of vaccine has changed and will probably be changed again. The Bible warns of a “science falsely so-called” (1 Timothy 6:20) and of being deceived by pharmakeia. (See also Isaiah 8:19-20.) There is much written about the possible goals of injections containing liquid nanoparticles, from bioweapons, to kill shots aimed at mass depopulation, to liquid computers meant to eventually hook us all up to the internet. “Metaverse is,” as Zuckerberg explained, “a vision that spans many companies—the whole industry and thus the whole world.” [3]

If the Covid injection concerns much more than preventing an illness, then, clearly, the metaverse encompasses more than a natural internet experience. Indeed, as the definition of embodiment denotes, the metaverse will take over our lives, becoming a god, emptying us of our real God-given dreams, desires, and purposes, retooling us from being fearfully and wonderfully made into something sinister. The metaverse is another verse out of the enemy’s playbook, a satanic realm of deception and illusion, full of evil.

The apostle Paul is clear: “Have no fellowship with the unfruitful works of darkness, but rather reprove them” (Ephesians 5:11, emphasis mine). “Darkness” is used as a metaphor for those who are ignorant of “divine things and human duties” (Strong’s). Ephesians 5:13 reads, “But all things that are reproved are made manifest by the light: for whatsoever doth make manifest is light.” Here, then, is another metaphor, for God is light and the full embodiment of knowledge and truth.

In the beginning, God breathed light into the emptiness and darkness, bringing about life (Genesis 1:1-3). As His children, I believe we emanate His glory from our spirit. While it may not be perceptible to us, it is to the spiritual realm we war against (Ephesians 6). Make no mistake: this is a spiritual war. We are misguided if we believe that we can fellowship with the “unfruitful works of darkness” and be holy. Not only will we disobey God and tempt Him, but also we risk losing our personhood. Though originally created in the image of God, we will be reduced to a device, a pawn for play to satisfy the fleshly desires of the elites.

Paul’s words in 1 Timothy 6:10 are fitting: “But they that will be rich fall into temptation and a snare, and into many foolish and hurtful lusts, which drown men in destruction and perdition.”

Taken together, then, artificial intelligence and the metaverse are a blending of man and machine with a futile attempt to transform this universe and humanity. Transhumanism—this new creation—is completely opposed to the Biblical account of God’s creation, as told in Genesis 1 when the “Spirit of God [Ruach Elohim] moved upon the face of the waters” (Genesis 1:2).

I think it's interesting that the translations of Ruach Elohim – "wind, breath, mind, spirit" – are all elements associated with tangible life and life's movement. Neither we nor the created world are blocks on some chain or chips connected to a cloud. Rather, our breath, mind, and spirit all come from Him, and so it should be, then, that "in him we live, and move, and have our being" (Acts 17:28, emphasis mine).

What, then, awaits the Christian who partakes in this game of "re-creation" now getting started? We might consider that what may seem good can be meant for evil. Such is the enemy's ultimate perversion of Joseph's understanding of being sold into Egypt. (See Genesis 50:20.) The enemy's plans are to steal, kill, and destroy, although they are often conveniently peddled as light. Thus, when we partake as "do-gooders," as virtue signalers, we take part in the thievery of the hopes and dreams that God has put into the hearts of His people. What we need to ascertain is not how long before these technologies become so widespread as to affect us personally, but how we will respond when they do.

Paul assures us in Romans 2:3, "And thinkest thou this, O man, that judgest them which do such things [see Romans 1:26-31 for a list of "vile affections"], and doest the same, that thou shalt escape the judgment of God?"

Take heart. Stay steadfast, Christian, for "glory and honor and immortality, eternal life" (Romans 2:7) are our inheritance through Christ Jesus, which nothing in this world can mimic or steal.

Jennifer Laufenberg
December 7, 2021

Please visit us to read about our mission and more of our writings. We are just getting started.



[1] Hotz, Robert Lee, "How to Get Smarter: Start with the Brain Itself," The Wall Street Journal, Aug. 12, 2021.
[2] Ripp, Alan, "How AI Will Make Your Doctor Smarter," The Wall Street Journal, Nov. 5, 2021.
[3] Newton, Casey, "Mark in the Metaverse," The Verge, June 22, 2021.
[4] Putzier, Konrad, "Metaverse Real Estate Piles Up Record Sales in Sandbox and Other Virtual Realms," The Wall Street Journal, Nov. 30, 2021.
[5] Mims, Christopher, "A View of Apple's Future," The Wall Street Journal, Dec. 4-5, 2021 (pg. B002).
[6] Bobrosky, Meghan, "Metaverse Emerges as Promising Yet Uncertain New World for Investors," The Wall Street Journal, Dec. 2, 2021.

Digital Slavery: 5G, Internet of Things and Artificial Intelligence

The Technocrats' lust for 5G and the Internet of Things is so strong that they are perfectly willing to ignore all human concerns, protests, and especially health concerns. However, the issue of Scientific Dictatorship, aka Technocracy, is much greater. ⁃ TN Editor

Technocracy was originally defined as “the science of social engineering, the scientific operation of the entire social mechanism to produce and distribute goods and services to the entire population…” (The Technocrat Magazine, 1938)

Planted as a seed in 1932, Technocracy has grown into a tree so big that it literally covers the earth today: that is, through the rebranding and repurposing by the United Nations as Sustainable Development, Agenda 21, 2030 Agenda, New Urban Agenda, etc.

Furthermore, it is like a hydra-headed monster with many tentacles and expressions, but we must never lose sight of the common purpose of all: kill the world’s economic system of Capitalism and Free Enterprise and replace it with the vacuous economic system, Sustainable Development.

Since Technocracy is a resource-based economic system, people like you and me are considered mere resources, on the same level as livestock on a ranch. If people are just animals who selfishly consume resources, then they must be monitored, managed, and limited in their consumption.

To this end, Technocracy originally called for total surveillance of all people, all consumption, all production and all energy consumed in every activity. The outcome was to control all consumption and production. This level of technology didn’t exist in 1932, but it does today!

When the surveillance network in America (and the world) is finally functional, the command-and-control system will become reality, resulting in a Scientific Dictatorship that exceeds even Orwell's Nineteen Eighty-Four or Huxley's Brave New World.

What is the last cog in the gearbox necessary to bring this about? In short, 5G!

Why? Consider the massive amount of data waiting to be collected from the widespread Internet of Things, facial recognition cameras, Smart City sensors, self-driving vehicles, and so on: they all lack one element, real-time connectivity. 5G solves this!


If you listen to any 2019 speech given by the CEO of Verizon, T-Mobile or AT&T, you will hear them rave over how 5G’s real-time connectivity is going to light up the Internet of Things like a Macy’s Christmas tree. You will hear the words “transformative” and “disruptive” over and over.

What’s the big deal with “real-time” connectivity? Artificial Intelligence (AI).

It is said that AI without data is as inert and useless as a pile of rocks. AI needs data to “learn” and then to take action. Up until now, Technocrats who create AI programs have had to use historical data for learning and that’s about all; forever learning but never doing.


The “holy grail” of Technocrats is to use their AI on REAL-TIME DATA. Real-time analysis can then close the control loop by feeding back real-time adjustments. This has never been done in the history of the world, but thanks to 5G, Technocrats everywhere are salivating to dive into the control business; that is, the “scientific operation of the entire social mechanism.”
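The "closing the control loop" idea above can be sketched in a few lines of code: read a real-time measurement, compare it to a target, and feed an adjustment back into the system. This is a purely illustrative proportional-feedback sketch; the function names, gain, and setpoint are all hypothetical, not anyone's actual system:

```python
# Minimal illustration of a closed feedback loop: read a real-time
# measurement, compare it to a target (setpoint), and feed an
# adjustment back into the system. All names and values are hypothetical.

def control_step(measurement: float, setpoint: float, gain: float = 0.5) -> float:
    """One proportional-control step: an adjustment that nudges the
    measurement toward the setpoint."""
    error = setpoint - measurement
    return gain * error

# Simulate a sensor value being steered toward a target of 10.0:
value = 0.0
for _ in range(20):
    value += control_step(value, setpoint=10.0)

print(round(value, 3))  # converges on 10.0
```

The point of the analogy: with only historical data, the system can compute the adjustment but never apply it; real-time connectivity is what lets the adjustment actually feed back in.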

Let me give you an example. Say you are an engineer and you designed and built a state-of-the-art fire truck that will revolutionize firefighting. There it sits on display for everyone to see. You start the engine and everyone is duly impressed, but still, it just sits there. Without water (the data) to pump through the numerous hoses, everyone, including yourself, can only imagine what it would be like. In fact, your engineering dream is quite useless until you take it to an actual, real-time fire and blast away with the water cannons to douse the flames. Only then will you know whether you were successful.

Technocrats understand this. They know that 5G will fully enable their AI inventions and dreams. Unfortunately for us, they also know that it will enable the feedback loop to control the objects of surveillance, namely, US!

The Technocrats' lust for 5G and the Internet of Things is so strong that they are perfectly willing to ignore all human concerns, protests, and especially health concerns.

Perhaps now you can understand how and why they are living out the old nautical phrase, “Damn the torpedoes, full speed ahead!” Risks don’t matter. Danger doesn’t matter. Collateral damage doesn’t matter.

To the extent that we citizens can nullify the rollout and implementation of 5G, we will scuttle the Technocrats' ability to establish a Scientific Dictatorship. Truly, it is we who should be mounting the counter-attack with our own cry of "Damn the torpedoes, full speed ahead!"


5G Will Revolutionize Internet Of Things And AI Platforms

T-Mobile lays out the real driver behind 5G: IoT and AI. Connecting everything and everybody together will permit command and control like never seen before in history. Unimaginable volumes of data will be collected, which is the life-blood of Artificial Intelligence. ⁃ TN Editor

The rollout of 5G will enable a rapid rise of IoT and AI, changing everything — again — for CIOs.

CIOs and CTOs have managed rapid digital transformation, yet even bigger change is coming. Soon. The impending rollout of 5G networks will enable a more rapid scaling of Internet of Things (IoT) and artificial intelligence (AI) platforms. This signals a major turning point in digital transformation in the enterprise, as technology leaders will be challenged to leverage these changes to boost the customer experience while protecting endpoints and data.

Enterprise CIOs and CTOs gathered recently for a roundtable, sponsored by T-Mobile for Business at the New York Stock Exchange, to discuss this significant wave of change about to crash ashore and wash over their global IT organizations.

All acknowledged that within a few years, AI platforms will routinely churn through unfathomable volumes of data generated by billions of IoT devices connected over 5G networks. As a result, organizations will have unprecedented opportunities to deliver entirely new customer experiences. Yet each enterprise will be similarly challenged to leverage the ‘big three’ (AI, IoT and 5G) to improve operations and processes and develop new products and services — all in the name of competitive advantage.

Who’s driving the change?

IoT growth projections are roughly 25 percent per year for the next several years, reaching a half trillion dollars globally within three years. Deloitte predicts the number of mobile providers launching 5G networks globally will double from 25 to 50 by the end of next year. Accenture research on the impact of AI in 12 industrialized countries found that AI could double annual economic growth rates by 2035 through changing the very nature of work.
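As a quick sanity check of that projection, we can back out the current market size it implies. The 25%-per-year rate and half-trillion-dollar target are the article's figures, taken here as assumptions:

```python
# Back out the market size implied by the projection above:
# 25% annual growth reaching $0.5 trillion within three years.
target = 0.5e12   # $0.5 trillion (article's figure)
rate = 0.25       # 25% annual growth (article's figure)
years = 3

baseline = target / (1 + rate) ** years
print(f"Implied current IoT market: ${baseline / 1e9:.0f}B")  # ~$256B
```

That implied baseline (roughly a quarter-trillion dollars today) is what makes the "half trillion within three years" figure arithmetically consistent with 25% annual growth.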

Make no mistake: The building wave of 5G-IoT-AI is as inevitable as it is enormous. The potential for disruption and change was not lost on the panel participants.

To Eash Sundaram, executive vice president and digital and technology officer at JetBlue Airways Corp., there is little question that consumers will drive the transformation triggered by these emerging technologies. “Consider what happened with the iPhone. Consumers drove the smart phone revolution and the enterprise adoption naturally followed,” Sundaram said.

The CIO of a major enterprise communications company agreed, saying, “Consumers will pull 5G into the enterprise, without doubt.”


JetBlue’s Sundaram noted that his company’s decision to provide free high-speed internet connectivity on its flights is another example of consumer-driven transformation. JetBlue customers want, and now expect, to have a consumer internet experience while flying. JetBlue went a step further than other airlines in offering this service free of charge on all domestic flights, and is working on boosting connection speeds.

“Consumers drove the smart phone revolution and the enterprise adoption naturally followed.”

Eash Sundaram, JetBlue

Securing the mountains of data

Sundaram believes the challenge with the explosion in IoT devices on 5G networks lies in linking them in ways that augment the customer experience. With regard to security, Sundaram says his experience and his ‘glass half-full’ philosophy lead him to think that the tenacious work of security experts will keep enterprises relatively safe.

However, the founder and CTO of a fast-growing security start-up said the velocity and sophistication of attacks are growing fast, leading him to question whether this will slow the expected boom in enterprise IoT devices. He said security in this emerging IoT environment would fall to AI platforms capable of quickly spotting anomalies in enormous streams of IoT and network data. But, as he and others noted, AI can be deployed to hack these devices as well.
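The anomaly-spotting described here can be sketched, in miniature, as a statistical filter over a sensor stream (the readings below are invented for illustration; real systems use far more sophisticated models):

```python
import statistics

def find_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) > threshold * stdev]

# Simulated temperature stream from an IoT sensor, with one injected spike.
stream = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 95.0, 20.1, 19.7]
print(find_anomalies(stream, threshold=2.0))  # [95.0]
```

The same principle, a model of "normal" plus a rule for flagging deviations, underlies far larger AI-driven monitoring platforms.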

This cautionary view was echoed by Dr. David Dodd, vice president and chief information officer of Information Technology at Stevens Institute of Technology. “I see high-speed 5G networks and IoT becoming commoditized,” Dodd said. “But rest assured the bad guys are actively figuring out how to compromise millions of new endpoints. How the enterprise will prepare for this reality and secure it will be a serious challenge.”

“How the enterprise will prepare for this reality and secure it will be a serious challenge.”

Dr. David Dodd, Stevens Institute of Technology

Expanding opportunity

Neil Green, vice president and transformation chief digital officer at Otis Elevator Co., says his company is already benefitting from IoT sensors and high-speed networking to sharply reduce elevator downtime, an aggravation to which most everyone can relate. Otis is actively mining sensor data to reduce major delays due to door malfunctions. Analyzing sensor data allows Otis to accurately predict minor failures that can lead to total shutdown, thus limiting most repairs to short intervals when traffic is light.

Green also spoke of the potential to leverage anonymous facial recognition and other data to predict who is using elevators and when (millennials, business executives, high-end shoppers, etc.), thereby allowing clients to deliver highly targeted advertising. “Digital transformation is all about the data, and a lot of us are struggling to figure out how to monetize it,” Green said.

This last point suggests one more challenge for CIOs: Beyond building improved customer experience and better security, what are the products and services these technologies will enable and how can they be monetized?

5G Will Revolutionize Internet Of Things And AI Platforms

Consumer Warning: Internet Of Things Is Security Nightmare


IoT promises Utopia but delivers a security train wreck. Consumers have virtually no chance of securely setting up a Smart Home because Technocrats have totally underestimated complexity and anti-hacking security.  ⁃ TN Editor

When a major electronics firm started seeing strange documents being printed out remotely on more than 100 of its smart printers late last year, it frantically contacted the manufacturer to investigate.

The firm nervously wondered how — and why — an unauthorized third party was sending documents to its printers remotely. And worse, it feared its entire corporate network had been breached. The manufacturer immediately called in the big guns, Charles Henderson, global head of X-Force Red, a professional hacking team at IBM Security, for answers.

“Unless you believe in ghosts, you get kind of concerned when your printer just starts printing stuff out that you can’t account for,” said Henderson, who declined to name the firm for privacy reasons.

His team quickly identified the problem as a flaw in the printer’s remote access function, and a patch fixed the vulnerability.


Finding and testing for flaws and breaches in smart devices is Henderson’s specialty. “I run a team of hackers,” is how Henderson describes his role, before clarifying that they are paid professional hackers who look for bugs, glitches, and malfunctions.


And with demand for smart devices, ranging from smart lights to outdoor sprinklers, surging in mainstream America, his job has gotten a lot busier.

“We’ve received roughly five times the number of requests for security testing of IoT [internet of things] devices in the last year,” Henderson said. “Growth has been immense over the last year to 18 months.”

Indeed, the soaring popularity of smart speakers, like Amazon Echo and Google Home, is starting to move the “Smart Home” into mainstream America. It’s no longer just tech geeks and phone-obsessed millennials who are scouring the tech universe for information on the next best gadget that lets them control lights, TVs, appliances, door locks, and even lawn sprinklers with a voice command or tap on a smartphone.

But all of this buzz and hype are putting pressure on smart device makers to rush their gadgets into the market while demand is hot — and sometimes, this means security features take a back seat, Henderson said. And cyber criminals are watching.

“Criminals rob banks because that’s where the money is,” said Charles Golvin, senior research director at Gartner, a research and advisory firm. “They’ll commit cyber crimes because that’s where the opportunity is.”

Some get crafty, making mock interfaces on a person’s phone that look like an IoT’s interface login to steal passwords — similar to the way thieves send fake emails to people pretending to be from credit card companies and banks.

Experts caution consumers to research carefully and move diligently when adding smart devices to their home network. “If one device gets compromised, it could be the same as allowing an attacker to plug into the entire network,” giving the criminal control over all devices, Henderson warned.

Concerns about privacy and the complexity of smart home devices are two reasons fully outfitted smart homes are not likely to happen overnight, experts say.

Wanting — and actually installing — smart devices are very different scenarios with the latter requiring patience and diligent research in navigating through a costly, cumbersome and often time-consuming process.

The setup also takes time, as it involves choosing brands and understanding routers, hubs and wireless communications protocols, like ZigBee, Z-Wave, and Bluetooth, so that all of the smart devices can talk to one another.

“If you actually want to make your house do half a dozen of these things, it’s a lot of work,” said Frank Gillett, vice president and principal analyst at Forrester Research. “You need people to be patient and comfortable with working through multiple steps of instruction, and in my observation, a significant amount of the population is not comfortable or patient.”

If you own a lot of different smart devices, it will mean many different apps on your phone. The process of opening and launching an app every time you want to control a device — or remember the exact phrases to get a voice assistant to do it — can be cumbersome and annoying.

One company, Sevenhugs, has set out to simplify this.

The firm’s single remote allows a person to control a home’s smart TV, lights, entertainment system, and other connected devices simply by pointing the remote at them. It means family members and guests can access all the smart devices without having to use a personal phone or launch multiple apps, said Simon Tchedikian, founder of Sevenhugs.

Ultimately Tchedikian wants to streamline content as well, so that someone could ask for the latest season of “Game of Thrones” and it would pop up without having to know and specify which streaming service, platform or on-demand service was offering it.


Besides ease of use, privacy and security are critical.

Using smart cameras can be great for remotely monitoring an aging parent or checking whether a child got home from school, but they could be intrusive and even risky if the system is hacked.

To lower risk and security concerns, experts suggest steps people should take when building a smart home.

First, buy quality brands. While some big brands, like Samsung, are leaders in smart appliances, the rest of the smart device world is fragmented, with much of the innovation coming from focused startups and midsize companies.

Some of the current leaders are Philips Hue for lights, Nest and Ecobee for thermostats, Ring for doorbells, and WeMo for light switches and plugs.

If it’s a startup, research the firm and make sure it has a strong online presence, preferably with active user groups discussing the product.

“If they don’t have a budget for an online presence, then they probably don’t have a budget for security,” Henderson said.


Second, security updates are critical. “Most technology companies are going to have vulnerabilities — it’s hard to get everything right” at the start, Henderson said. He recommends checking for patches or firmware updates on the company’s website to make sure it’s on top of security issues.

Third, create strong Wi-Fi passwords and engage two-factor authentication where possible.
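For the password step, here is a minimal sketch using Python's standard `secrets` module, which is designed for cryptographic randomness (unlike the general-purpose `random` module):

```python
import secrets
import string

def strong_password(length=20):
    """Generate a high-entropy password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())  # different on every run
```

A 20-character password drawn from a ~94-symbol alphabet is far beyond practical brute-force range; a password manager achieves the same result without any code.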

Fourth, if you move into a new home, buy a secondhand car, or purchase a used smart device, always make sure previous owners’ accounts aren’t still connected to the hubs, routers and devices.

Henderson recalls selling his smart car and buying a new one from the same manufacturer. When he went to enroll his new car in the auto manufacturer’s app, he discovered his old account had not been deleted from his old convertible.

“They hadn’t revoked my access,” Henderson said. “I could have tracked down my old car using the GPS functionality, I could have unlocked it, honked the horn — I could have made the new owner of my old car think the car was possessed.”


Also, always look at all devices that are connected to your network.

“If you’ve got rogue devices connected to your network, it’s not your network anymore. It’s a shared network,” Henderson said. “If you had access to somebody’s home hub — and that hub had a sprinkler system, light switches and garage door opener connected to it, you could open their garage door, turn on the sprinkler systems, and start flashing the lights.”
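One quick way to look at the devices connected to your network is to inspect your machine's ARP cache, which lists hosts it has recently talked to. A rough sketch, assuming the common `arp -a` command is available (its output format varies by operating system, and the cache only shows recently seen devices, not everything on the network):

```python
import re
import subprocess

def parse_arp_output(text):
    """Extract IPv4 addresses from `arp -a` style output."""
    return re.findall(r"\d{1,3}(?:\.\d{1,3}){3}", text)

try:
    # The ARP cache lists devices this machine has recently exchanged
    # traffic with on the local network.
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    for ip in parse_arp_output(output):
        print(ip)
except OSError:
    print("`arp` command not available on this system")
```

An unfamiliar address in that list is worth investigating; many routers also show a friendlier "connected devices" page in their admin interface.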

Fifth, consumers need to prepare for a smart device’s failure — whether it’s because of a product malfunction or a power or internet outage.

“Turn off power to the devices and unplug the internet and see what happens,” Henderson said. “But you definitely don’t want to wait and find out that they don’t work when you’re standing outside your home trying to get in.”

Like any technology, smart or not, these devices will sometimes malfunction.

“Things do break,” Golvin said.

Finding someone to repair that new technology can be challenging — even if it’s made by a big brand.

When Jordan and Ben Feria of Orange Park, Florida, purchased a $4,400 Samsung Family Hub smart refrigerator in late 2017, the refrigerator portion broke down last November, even though the smart features on the outside touch-screen continued to work. The couple took to social media and a local TV station after dozens of technicians were unable to repair it. When the couple contacted Samsung, they were told there was only one authorized technician in Northeast Florida who could handle the repair — and even then, the person was unable to repair it. The couple wound up getting a refund.

A Samsung spokeswoman, Alicia Clarke, described the Ferias’ problem as a “rare experience” and said “the matter was resolved with the consumer,” and noted that the problem was related to the refrigerator’s compressor — not with the smart technology.


“While it is unfortunate that the Ferias had an issue with their refrigerator, the problem was limited to the unit’s compressor, not any of the smart technology incorporated into the Family Hub,” she said.

And finally, in this fiercely competitive and fast-changing space, many smart device makers will implode — and consumers need a fallback plan in place if they do. Even the most promising company can go belly-up without warning.

LightHouse was widely hailed as a trailblazer with its home cameras that offered 3D sensors and artificial intelligence capabilities. Its cameras could monitor with such precision that a voice command asking the app how a vase got broken earlier in the day could pull up the section of video that showed the child or pet who did the deed, said Gillett of Forrester.

LightHouse was viewed as the future. However, the company abruptly closed shop in late 2018, with a note on its webpage, titled “Lights Out,” that read: “Unfortunately, we did not achieve the commercial success we were looking for and will be shutting down operations in the near future.”

“This is what happens sometimes with these cool vendors who are ahead of the curve,” Golvin said. “Bleeding edge versus leading edge.”


5G And Internet Of Things To Create Unprecedented Surveillance

Are citizens required to passively sit by while the manacles of scientific dictatorship are clamped around their necks? More people recognize the encroachment, but not enough to slow or stop it. ⁃ TN Editor

Convenience is the sales pitch, but the real goal is control in service of maximizing profits and extending state power.

When every device in your life is connected to the Internet (the Internet of Things), your refrigerator will schedule an oil change for your car–or something like that–and it will be amazingly wunnerful. You’ll be able to lower the temperature of your home office while you’re stuck in a traffic jam, while your fridge orders another jar of pickles delivered to your door.

It’s all in service of convenience, the god all Americans are brainwashed to worship. Imagine the convenience of turning on the light while seated on your sofa! Mind-boggling convenience at your fingertips–and since you’re already clutching your smart phone 24/7, convenience is indeed at your fingertips.

It’s also about control, and as we lose control of everything that’s actually important in our lives, the illusion of agency/control is a compelling pitch. Imagine being able to program your fridge to order a quart of milk delivered when it gets low but not order another jar of pickles when that gets low! Wow! That’s control, yowzah.

The Internet of Things is indeed about control – not your control, but control over you – control of what’s marketed to you, and control of your behaviors via control of the incentives, distractions and micro-decisions that shape behavior.

The control enabled by the Internet of Things starts with persuasion and quickly slides into coercion. Since corporations and government agencies will have a complete map of your movements, purchases, consumption, communications, etc., then behavior flagged as “non-beneficial” will be flagged for “nudging nags”, while “unsanctioned” behavior will be directed to the proper authorities.

Say you’re visiting a fast-food outlet for the fourth time in a week. Your health insurance corporation has set three visits a week as a maximum, lest your poor lifestyle choices start costing them money for treatments, so you get a friendly “reminder” to lay off the fast food or make “healthier” choices off the fast food menu.


Failure to heed the “nudges” will result in higher premiums or cancelled coverage. Sorry, pal, it’s just business. Your “freedom” doesn’t extend to costing us money.

Domestic corporate versions of China’s social credit score will proliferate. Here is evidence that such scores already exist:

Everyone’s Got A “Surveillance Score” And It Can Cost You Big Money (Zero Hedge)

Then there’s the surveillance. The Internet of Things isn’t just monitoring energy use and the quantity of milk in a fridge; it’s monitoring you–not just in your house, car and wherever you take your Personal Surveillance Device, i.e. your smart phone, but everywhere you go.


If you are a lookie-loo shopper–you browse the inventory but rarely buy anything–expect to be put in Category Three–zero customer service, and heightened surveillance in case your intent is to boost some goodies (shoplift).

Heaven help you if you start spending time reading shadow-banned websites like Of Two Minds: your social credit standing moves into the red zone, and your biometric scans at airports, concerts, retail centers etc., will attract higher scrutiny. You just can’t be too sure about people who stray off the reservation of “approved” corporate media.

Your impulses are easy to exploit: since every purchase is tracked, your vulnerabilities to impulse buys will be visible with a bit of routine Big Data analysis, and so the price of the treats you succumb to will go up compared to the indifferent consumer next to you. Sorry, pal, it’s just business. Your vulnerabilities, insecurities and weaknesses are profit centers. We’d be foolish not to exploit them to maximize profits, because that is the sole mission of global corporations.

Governments access the trove of surveillance for their own purposes. Monitoring phone calls, texts and emails is only the first step; privacy as a concept and a right has effectively ceased to exist other than as a legal abstraction and useful fiction. The Dawn Of Robot Surveillance: AI, Video Analytics, and Privacy.

Longtime correspondent Simon H. recently submitted a video link on The Internet of Things as well as a sobering and insightful commentary.

Here is an overview by James Corbett of the totalitarian reach of the 5G IoT and a technocratic surveillance dictatorship. All delivered as an unavoidable facet of inevitable tech progress.

The 5G Dragnet

There seems to be an idea that the only reason we have historically had privacy, civil liberties and general freedoms is because in the past we lacked the technology to eliminate them.

The future does indeed seem to have globalist technocracy written all over it, which is to be presented as a simple matter of embracing technological progress and celebrating new technological wonders. Don't think about the total surveillance taking place; just marvel at the speed of your connections and the convenience of outsourcing all of your troubling personal sovereignty to machine assistants that make all of your decisions for you.

Anyone who resists this undemocratic future will be branded as a nostalgically foolish, technological Luddite. However, this new form of tech is completely different in nature to all of those that have preceded it. If we think in terms of macro and micro economics, then we can also look at current developments in terms of macro and micro sovereignties. This phenomenon is more pronounced in the UK than the US because of the sovereignty issues of the EU and Brexit.

Not only is our democratic sovereignty being eroded by supranational organizations such as the EU, the IMF, the IPCC, markets and the central banking masters of the economic universe, etc.; if we take surveillance capitalism, 5G and the Internet of Things (IoT) into consideration, one can see that our sovereignty is also under direct dual attack at an extreme and fundamentally personal level.

Against all of these things we are seeing extraordinary coalitions of resistance: Marxists, Anti-Capitalists, Anarchists, Austrian Libertarians and anyone of an old-school, left- or right-wing true liberalism who believes in the principles of democracy and sovereignty, freedom of speech, privacy and civil liberties.

The so-called liberal progressives who support globalism and the technocracy are anything but liberal: they are imperialist totalitarians, no better and no less dangerous than the Nazis. We desperately need to strip them of their fake liberal and moderate claims and show them for what they truly are -- sociopaths.

Thank you, Simon. Resistance can take many forms.

One approach is to minimize surveillance by stripping out apps from your smart phone, leaving it in a drawer most of the time, and disabling wifi in all appliances and devices you buy/own. This approach isn't perfect, as surveillance is far beyond our control, despite Big Tech claims of transparency, privacy controls, etc., but nonetheless any reduction in data collection is meaningful.


More than 1,000 Android apps harvest data even after you deny permissions (via Mark J.)

Buy with cash and buy the absolute minimum. If you only buy real food--meats, vegetables, grains, fruit, etc.--you've effectively stripped out all the profit potential of our corporate overlords. Who is going to make a big profit offering you a discount on raw carrots? No one.


Whatever you buy in person with cash, impulse buys included, can't be tracked.


Limit your Personal Surveillance Device, i.e. your smart phone: disable its "always listening" and other capabilities; leave it in the drawer, etc.

How to Turn an Android Phone Into a Dumbphone in 8 Steps

Understand you're being played and gamed 24/7; ignore all the marketing, pitches and propaganda. Make it a habit to ignore all marketing pitches, discounts, coupons, etc. Become an anti-consumer, minimizing trackable purchases and pursuing a DeGrowth lifestyle of repairing existing items and making everything you own last rather than replace it with a new item (this is the Landfill Economy I've discussed many times, with thanks to correspondent Bart D. who coined the phrase to the best of my knowledge).


Don't buy wifi-enabled devices, and disable wifi if there are no non-wifi options available.

This subverts the value of the data Facebook, Google, et al. collect on you and sell to the highest bidder. If the data isn't useful in selling you something, then the buyers of the data will at a minimum weed the non-controllable consumers out of the data pool.


Since any deviance outside "normal" attracts scrutiny, game the system by logging a baseline of "normal" purchases and activities. Appearing minimally ordinary has its advantages. Trying too hard to leave no digital footprint is itself highly suspicious.


Advocate for digital privacy / Freedom from Surveillance and AI Bill of Rights. There is still a narrow window in the U.S. for protecting and expanding civil liberties and privacy. Here is an example of a proposed Algorithmic Bill of Rights:

Convenience is the sales pitch, but the real goal is control in service of maximizing profits and extending state power. "To serve humans" takes on new meaning in Big Tech's and Big Government's Orwellian Internet of Things: To Serve Man (The Twilight Zone).



There's a movement going on in the scientific community that you might not be aware of. This idea of Artificial Intelligence or A.I. has been in development for quite some time. In fact, a lower form of artificial intelligence has been in front of us the whole time. In this series you'll receive a wealth of information that proves without a doubt where this movement is going. You'll discover what its connection to drone technology really is, and what it will all become if humanity isn't careful.

These are the days when things are happening so fast you can hardly keep up - and one of the things you can't keep up with, because it moves so fast, is artificial intelligence (AI).

Artificial Intelligence is a branch of computer science that endeavors to replicate or simulate human intelligence in a machine, so machines can perform tasks that typically require human intelligence. Some programmable functions of AI systems include planning, learning, reasoning, problem solving, and decision making.

AI systems are powered by algorithms, using techniques such as machine learning, deep learning and rules. Machine learning algorithms feed computer data to AI systems, using statistical techniques to enable AI systems to learn. Through machine learning, AI systems get progressively better at tasks, without having to be specifically programmed to do so.
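A toy illustration of that idea, a statistical technique that improves from data rather than from explicit rules, is fitting a line by gradient descent (the data and the hidden rule y = 2x + 1 below are invented for the example):

```python
def fit_line(points, steps=2000, lr=0.01):
    """Fit y = w*x + b by gradient descent on squared error,
    a toy version of 'learning from data' rather than explicit rules."""
    w = b = 0.0
    n = len(points)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in points) / n
        db = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# The system is never told the rule y = 2x + 1; it infers it from examples.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = fit_line(data)
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Each pass over the data nudges the parameters toward smaller error, which is exactly the "getting progressively better at tasks" the paragraph describes, scaled down to two parameters.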

AI can encompass anything from Google’s search algorithms to IBM’s Watson, to autonomous weapons. AI technologies have transformed the capabilities of businesses globally, enabling humans to automate previously time-consuming tasks, and gain untapped insights into their data through rapid pattern recognition.

There are no shortcuts to mimicking the human brain. To create true AI, you need to hand-feed it massive amounts of information every day. We receive information through the five senses – sight, sound, smell, taste, touch. AI's information comes from each one of us – through emails, search engine databases, our purchases, buying habits, our social media profiles, our photos, everything – all of that is being used. They use this big data, the “new oil,” to feed AI, and now we have the data to create the superhuman brain. And it's working – it's developing these crazy abilities to literally micromanage the planet.

It’s hard to believe that there is such a thing as artificial intelligence moving through society right now – and the way that it intersects with your life. When you see it with your own eyes – the news clips, the examples from news around the world, the scientists, the think-tankers, the interviews, the military and medical personnel – it's all there. These people admit what they're up to. Unfortunately, most people aren't equipped for what the ramifications are on a global basis, and a lot of people in the church aren't understanding this from a Bible prophecy point of view. What we're seeing right now – the rise of AI (artificial intelligence) – is a sign that we're living in what the Bible calls the end of times.

There is already a supercomputer controlling much of our world:

  • Finance

  • Military

  • Planning

  • Dating

  • Social planning

  • Agriculture

  • Banking

  • Self-driving cars

  • Robotic surgery

  • Lightning fast communication

  • Everything you can possibly think of


These are the things that you need to pull off the events of the seven-year tribulation that the antichrist is going to do. They're happening on a global basis now.


According to the Bible, technology needs to develop to such a degree that one guy can literally micromanage the planet.

  • How are they going to know when people are speaking?

  • Obeying and not obeying?

  • Whether they're for him or against him?

  • Whether they obey the command to worship him or not?

  • How are so many people going to be annihilated in such a short amount of time?

  • How is he going to control what people do?

  • How is he going to control the global economy?

  • How is he going to control what people buy and sell?


On a global basis, you would have to somehow tie in every product to every person. Well, guess what – that's already here. But again – the point is that you could have the technology to literally microchip the planet (which is already here), but think about this on a global scale. Who's going to run the back end?  You can't hire enough people! But you can do it with AI.  And that's here right now.


And it’s being pitched to people as a cool convenience. Whether people realize it or not, they're already using AI:

  • When you go to a search engine – it's not somebody in the background doing that database search, it’s AI

  • When you're speaking to your phone using Siri or Cortana - or you're speaking to your home device (Alexa)

  • Dating apps

  • Help desks

  • Reservations


It all sounds so good but the Bible tells us how these seemingly helpful benefits to society will be used to usher in the dark agenda of the Antichrist in the future.



AI technologies are categorized by their capacity to mimic human characteristics, the technology they use to do this, their real-world applications, and the theory of mind, which we’ll discuss in more depth below.

Using these characteristics for reference, all artificial intelligence systems - real and hypothetical - fall into one of three types:

  1. Artificial narrow intelligence (ANI), which has a narrow range of abilities;

  2. Artificial general intelligence (AGI), which is on par with human capabilities; or

  3. Artificial superintelligence (ASI), which is more capable than a human.

Artificial Narrow Intelligence (ANI) – Weak AI / Narrow AI


Artificial narrow intelligence (ANI), also referred to as weak AI or narrow AI, is the only type of artificial intelligence we have successfully realized to date. Narrow AI is goal-oriented, designed to perform singular tasks - e.g. facial recognition, speech recognition/voice assistants, driving a car, or searching the internet - and is very intelligent at completing the specific task it is programmed to do.


While these machines may seem intelligent, they operate under a narrow set of constraints and limitations, which is why this type is commonly referred to as weak AI. Narrow AI doesn’t mimic or replicate human intelligence, it merely simulates human behaviour based on a narrow range of parameters and contexts.


Consider the speech and language recognition of the Siri virtual assistant on iPhones, the vision recognition of self-driving cars, and recommendation engines that suggest products you may like based on your purchase history. These systems can only learn or be taught to complete specific tasks.


Narrow AI has experienced numerous breakthroughs in the last decade, powered by achievements in machine learning and deep learning. For example, AI systems today are used in medicine to diagnose cancer and other diseases with extreme accuracy through replication of human-esque cognition and reasoning.


Narrow AI’s machine intelligence comes from the use of natural language processing (NLP) to perform tasks. NLP is evident in chatbots and similar AI technologies. By understanding speech and text in natural language, AI is programmed to interact with humans in a natural, personalised manner.

Narrow AI can either be reactive, or have a limited memory. Reactive AI is incredibly basic; it has no memory or data storage capabilities, emulating the human mind’s ability to respond to different kinds of stimuli without prior experience. Limited memory AI is more advanced, equipped with data storage and learning capabilities that enable machines to use historical data to inform decisions.
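The reactive versus limited-memory distinction can be sketched with two toy thermostat agents (entirely hypothetical, for illustration only):

```python
class ReactiveAgent:
    """Responds to the current input only; no memory of past stimuli."""
    def respond(self, temperature):
        return "cool" if temperature > 25 else "idle"

class LimitedMemoryAgent:
    """Keeps recent history and reacts to the trend, not just the instant."""
    def __init__(self, window=3):
        self.history = []
        self.window = window
    def respond(self, temperature):
        self.history = (self.history + [temperature])[-self.window:]
        avg = sum(self.history) / len(self.history)
        return "cool" if avg > 25 else "idle"

reactive = ReactiveAgent()
memory = LimitedMemoryAgent()
for t in [30, 30, 20]:          # a brief dip after sustained heat
    r, m = reactive.respond(t), memory.respond(t)
print(r, m)  # reactive: 'idle' (sees only the 20); memory: 'cool' (average 26.7)
```

The reactive agent forgets the heat the moment the reading dips; the limited-memory agent uses stored history to make a decision informed by recent data, the same principle that lets larger systems learn from experience.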


Most AI is limited memory AI, where machines use large volumes of data for deep learning. Deep learning enables personalized AI experiences, for example, virtual assistants or search engines that store your data and personalize your future experiences.


Examples of narrow AI:

  • RankBrain by Google / Google Search

  • Siri by Apple, Alexa by Amazon, Cortana by Microsoft and other virtual assistants

  • IBM’s Watson

  • Image / facial recognition software

  • Disease mapping and prediction tools

  • Manufacturing and drone robots

  • Email spam filters / social media monitoring tools for dangerous content

  • Entertainment or marketing content recommendations based on watch/listen/purchase behaviour

  • Self-driving cars

  • Hotel reservations

  • Google AI appointment center

Artificial General Intelligence (AGI) – Strong AI / Deep AI

Artificial general intelligence (AGI), also referred to as strong AI or deep AI, is the concept of a machine with general intelligence that mimics human intelligence and/or behaviors, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is indistinguishable from that of a human in any given situation.

AI researchers and scientists have not yet achieved strong AI. To succeed, they would need to find a way to make machines conscious, programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wider range of different problems.

Strong AI uses a theory of mind AI framework, which refers to the ability to discern the needs, emotions, beliefs and thought processes of other intelligent entities. Theory of mind level AI is not about replication or simulation; it’s about training machines to truly understand humans.

The immense challenge of achieving strong AI is not surprising when you consider that the human brain is the model for creating general intelligence. The lack of comprehensive knowledge on the functionality of the human brain has researchers struggling to replicate basic functions of sight and movement.

Fujitsu’s K computer, one of the fastest supercomputers ever built, represents one of the most notable attempts at achieving strong AI, but considering it took 40 minutes to simulate a single second of neural activity, it is difficult to determine whether strong AI will be achieved in the foreseeable future. As image and facial recognition technology advances, it is likely we will see an improvement in the ability of machines to learn and see.

Artificial Superintelligence (ASI)

Artificial superintelligence (ASI) is the hypothetical AI that doesn’t just mimic or understand human intelligence and behavior; ASI is where machines become self-aware and surpass the capacity of human intelligence and ability.

Superintelligence has long been the muse of dystopian science fiction in which robots overrun, overthrow, and/or enslave humanity. The concept of artificial superintelligence sees AI evolve to be so akin to human emotions and experiences that it doesn’t just understand them; it evokes emotions, needs, beliefs and desires of its own.

In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be exceedingly better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have a greater memory and a faster ability to process and analyze data and stimuli. Consequently, the decision-making and problem-solving capabilities of super intelligent beings would be far superior to those of human beings.

The potential of having such powerful machines at our disposal may seem appealing, but the concept itself has a multitude of unknown consequences. If self-aware super intelligent beings came to be, they would be capable of ideas like self-preservation. The impact this will have on humanity, our survival, and our way of life, is pure speculation.

Types of AI

AI Holograms

When you combine 3D hologram technology with AI, you can create any image, of anybody, at any age, from baby to adult, from young person to elderly. It is so realistic you could literally see the pores in their skin and the individual hairs on their head. Then you combine this with AI and it's not just a realistic image of anybody you want (including the Antichrist); it would be able to speak and communicate around the world, and not just pre-scripted speech. It can understand, communicate, and react.


Mark Sagar started his career by building medical simulations of body parts. He took those skills and went into CGI, most famously for movies including Avatar, King Kong, and others. Now he's combining his skills and building an entire brain and responsive face on a computer in order to map human consciousness.

Roy Orbison - You Got It - BASE Hologram Tour (2019)

Roy Orbison hologram plays "You Got It" with live band and singers from the "Rock N Roll Dreams Tour" 

Researchers built an AI that recognizes and rewards good doggos

Other Examples

Using AI to generate 3D holograms in real-time – A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.


A New, Artificially Intelligent Hologram Was Just Born

The next evolution of virtual concierge services will gain a physical presence thanks to hologram technology.


Deep Learning Enables Real-Time 3D Holograms On a Smartphone

New AI technique can rapidly generate holograms with less than 1 megabyte of memory




“I believe that the birth of AI is just as big a watershed prophetic sign as Israel becoming a nation in 1948.”

– Billy Crone

Revelation 13:14-15 – The Beast from the Earth

Rev 13:14 And he deceives those who dwell on the earth by those signs which he was granted to do in the sight of the beast, telling those who dwell on the earth to make an image to the beast who was wounded by the sword and lived. 15 He was granted power to give breath to the image of the beast, that the image of the beast should both speak and cause as many as would not worship the image of the beast to be killed. 

Revelation 14:9-11 – The Proclamations of Three Angels

Rev 14:9 Then a third angel followed them, saying with a loud voice, “If anyone worships the beast and his image, and receives his mark on his forehead or on his hand, 10 he himself shall also drink of the wine of the wrath of God, which is poured out full strength into the cup of His indignation. He shall be tormented with fire and brimstone in the presence of the holy angels and in the presence of the Lamb. 11 And the smoke of their torment ascends forever and ever; and they have no rest day or night, who worship the beast and his image, and whoever receives the mark of his name.”

Revelation 16:2 – First Bowl: Loathsome Sores

Rev 16:2 So the first went and poured out his bowl upon the earth, and a foul and loathsome sore came upon the men who had the mark of the beast and those who worshiped his image.

Revelation 15:1-2 – Prelude to the Bowl Judgments

Rev 15:1 Then I saw another sign in heaven, great and marvelous: seven angels having the seven last plagues, for in them the wrath of God is complete. 2 And I saw something like a sea of glass mingled with fire, and those who have the victory over the beast, over his image and over his mark and over the number of his name, standing on the sea of glass, having harps of God. 

Revelation 20:4 – The Saints Reign with Christ 1,000 Years

Rev 20:4 And I saw thrones, and they sat on them, and judgment was committed to them. Then I saw the souls of those who had been beheaded for their witness to Jesus and for the word of God, who had not worshiped the beast or his image, and had not received his mark on their foreheads or on their hands. And they lived and reigned with Christ for a thousand years. 

Combine AI hologram with rewards/discipline for behavior and fast-forward to the image of the Antichrist and the Mark of the Beast.


Rev 13:15 He was granted power to give breath to the image of the beast, that the image of the beast should both speak and cause as many as would not worship the image of the beast to be killed. 16 He causes all, both small and great, rich and poor, free and slave, to receive a mark on their right hand or on their foreheads, 17 and that no one may buy or sell except one who has the mark or the name of the beast, or the number of his name.

Now add satellites – they’re sometimes launching them 100 at a time these days. These satellites are producing a ‘matrix’. And now we have a 5G network with speeds that are able to create what they call the Internet of Things (IoT). The IoT is basically everything on the planet – from your home, to the city, to your vehicles, and eventually to the person – wherever you go, whatever you do, you will be tracked. And it will be a mark in your right hand or your forehead that ties you into this global matrix system. Without this Mark of the Beast, you will not be able to buy and sell.


How is one guy going to know and have the ability to control (literally micromanage) every single product, person, and purchase? And then be able to control the permission of that person to buy or sell? The AI database, with input from surveillance cameras and satellite feeds, will know whether that person obeyed the rules, worshipped when they were supposed to, etc. For the first time in history, just in time for the Antichrist, AI can run the whole global back-end system. This global satellite network will make the Antichrist and his AI image appear to be omniscient, omnipotent and omnipresent, just like God.


AI is also running the whole global financial system – global banking, global stock markets, your personal accounts, taxes, etc.  Right now, financial systems around the world are being broken down and a Universal Basic Income (UBI) is being pushed by the United Nations (UN) and World Economic Forum (WEF).  A UBI is a government program in which every adult citizen receives a set amount of money on a regular basis.


“You will own nothing and you will be happy.  Whatever you want, you will rent, and it will be delivered by drone”

–WEF, Great Reset, 8 Predictions for the World in 2030


How are you going to run this?  AI gives them the ability to do this crazy Great Reset. Everyone will receive a Digital Identity which will be tied to a Digital Immunity Passport (proof of Covid vaccination) and a cryptocurrency system.


  • ID2020 (Digital Identity)

    •      Alliance Partners: Microsoft, The Rockefeller Foundation, Gavi the Vaccine Alliance, etc 




In an interview between Billy Crone and global bankers, they admitted it.

Billy Crone: “How are you going to tie everybody into this global financial system?”

Bankers: “We're going to microchip people and the microchips are going to be connected to a cashless society that we’re creating, that AI is running on a global basis. And without that microchip you will not be able to buy and sell” … “if you don't do what we say, we will shut off your chip.”


That's why they want to build this. This is why the movers and shakers of this world feel that they've got everything under control.  Whether they've read Revelation 13 or not, we know Biblically that this is where everything is headed.  But we read the Book, and we know how it's going to end.

This is my rule of thumb – everything that we share on our videos and in the books is what I call ‘over-the-counter’ information.  This is what they are admitting in public. But we all know that that's not all there is.  What we get is soft disclosure – a little bit here, a little bit there. My general rule of thumb: anything that they have shared with us in public (including about AI and how far it has advanced), we are literally 20 – 30 (maybe 40 – 50) years behind what they’ve actually done. And if they're this far along and will admit it, what do they really have? That's the kind of stuff that we need to be aware of. –Billy Crone

AI is already running the communication system, banking systems, the satellites, the internet.  AI is the one that can manage all these individualized microchips of everything, including people (eventually) around the planet.  It's here now – just in time for the Mark of the Beast scenario. And most people have no clue how far it's gone. That's why we have to get equipped.  This isn't just something cool, something eclectic. This is a technology that they need to pull off the events of the Tribulation. The new world has arrived, but it's hardly the long-awaited utopia the world imagined. There is a sinister plan at work here. Some call it the New World Order, others call it the Great Reset, the Bible calls it the Antichrist Kingdom (Tribulation).

AI in the Bible


8 ways AI can help save the planet – World Economic Forum (WEF)


It’s a historic moment for Artificial Intelligence (AI). All the pieces are coming together: big data, advances in hardware, emerging powerful AI algorithms, and an open source community for tools that reduces barriers to entry for industry and start-ups alike. The result: AI is being propelled out of research labs and into our everyday lives, from navigating cities, ride shares, our energy networks, to the online world.

In 2018 everyone is starting to see the business value of AI. It is being added to more and more things every year, and it is getting smarter and smarter – accelerating human innovation. But as AI becomes more powerful, more autonomous and broader in its use and impact, the unsolved issue of AI safety is paramount. Risks include: bias, poor decision making, low transparency, job losses and malevolent use of AI, such as autonomous weaponry.

The challenge, however, goes beyond guiding “human friendly AI” to ensuring “Earth friendly AI”. As the scale and urgency of the economic and human health impacts from our deteriorating natural environment grows, we have an opportunity to look at how AI can help transform traditional sectors and systems to address climate change, deliver food and water security, build sustainable cities, and protect biodiversity and human wellbeing.

To this end, in a new Forum-PwC report launched at Davos this year, we showcase the significant opportunity to harness AI for the Earth. Here we outline eight of the identified “game changer” AI applications to address this planet’s challenges:

AI - WEF AI saves the world.png

1. Autonomous and connected electric vehicles

AI-guided autonomous vehicles (AVs) will enable a transition to mobility on-demand over the coming years and decades. Substantial greenhouse gas reductions for urban transport can be unlocked through route and traffic optimisation, eco-driving algorithms, programmed “platooning” of cars to traffic, and autonomous ride-sharing services. Electric AV fleets will be critical to deliver real gains.

2. Distributed energy grids

AI can enhance the predictability of demand and supply for renewables across a distributed grid, improve energy storage, efficiency and load management, assist in the integration and reliability of renewables and enable dynamic pricing and trading, creating market incentives.
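The demand-prediction idea behind this point can be sketched very simply: forecast the next hour's load from recent history so a grid could schedule storage and renewables. Real grid AI uses far richer models; this least-squares trend line, with invented numbers, only shows the principle.

```python
# Fit a straight line y = a + b*t to recent load readings and
# extrapolate one step ahead. Needs at least two data points.
def forecast_next(load_history):
    """Predict the next value by linear extrapolation of the history."""
    n = len(load_history)
    t_mean = (n - 1) / 2                      # mean of t = 0..n-1
    y_mean = sum(load_history) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(load_history))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den                             # slope
    a = y_mean - b * t_mean                   # intercept
    return a + b * n                          # one step past the last reading

hourly_load_mw = [90.0, 95.0, 100.0, 105.0]   # invented, steadily rising demand
print(forecast_next(hourly_load_mw))          # 110.0
```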

3. Smart agriculture and food systems

AI-augmented agriculture involves automated data collection, decision-making and corrective actions via robotics to allow early detection of crop diseases and issues, to provide timed nutrition to livestock, and generally to optimise agricultural inputs and returns based on supply and demand. This promises to increase the resource efficiency of the agriculture industry, lowering the use of water, fertilisers and pesticides which cause damage to important ecosystems, and increase resilience to climate extremes.

4. Next generation weather and climate prediction

A new field of “Climate Informatics” is blossoming that uses AI to fundamentally transform weather forecasting and improve our understanding of the effects of climate change. This field traditionally requires high performance energy-intensive computing, but deep-learning networks can allow computers to run much faster and incorporate more complexity of the ‘real-world’ system into the calculations.

In just over a decade, computational power and advances in AI will enable home computers to have as much power as today’s supercomputers, lowering the cost of research, boosting scientific productivity and accelerating discoveries. AI techniques may also help correct biases in models, extract the most relevant data to avoid data degradation, predict extreme events and be used for impacts modelling.

5. Smart disaster response

AI can analyse simulations and real-time data (including social media data) of weather events and disasters in a region to seek out vulnerabilities and enhance disaster preparation, provide early warning, and prioritise response through coordination of emergency information capabilities. Deep reinforcement learning may one day be integrated into disaster simulations to determine optimal response strategies, similar to the way AI systems like AlphaGo are used to identify the best move in a game.
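To give a feel for the reinforcement-learning idea, here is a toy tabular Q-learning run that learns the fastest route to an exit in a five-cell corridor. Real disaster simulations are vastly more complex; the grid, rewards, and parameters below are all invented.

```python
# Tabular Q-learning on a tiny corridor: states 0..4, exit at cell 4,
# actions -1 (left) and +1 (right). The agent learns to head for the exit.
import random

random.seed(0)
N, EXIT = 5, 4
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                # training episodes
    s = 0
    while s != EXIT:
        if random.random() < eps:   # explore occasionally
            a = random.choice((-1, +1))
        else:                       # otherwise act greedily
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 10.0 if s2 == EXIT else -1.0        # reward reaching the exit
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        s = s2

# The learned greedy policy should send every state toward the exit (+1).
policy = {s: max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)}
print(policy)
```

The same loop, scaled up with deep networks and a realistic simulator, is the shape of the "deep reinforcement learning for response strategies" the article describes.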

AI - WEF AI Earth Game Changers.png


6. AI-designed intelligent, connected and livable cities

AI could be used to simulate and automate the generation of zoning laws, building ordinances and floodplains, combined with augmented and virtual reality (AR and VR). Real-time city-wide data on energy, water consumption and availability, traffic flows, people flows, and weather could create an “urban dashboard” to optimise urban sustainability.

7. A transparent digital Earth

A real-time, open API, AI-infused, digital geospatial dashboard for the planet would enable the monitoring, modelling and management of environmental systems at a scale and speed never before possible – from tackling illegal deforestation, water extraction, fishing and poaching, to air pollution, natural disaster response and smart agriculture.

8. Reinforcement learning for Earth sciences breakthroughs

This nascent AI technique – which requires no input data, substantially less computing power, and in which the evolutionary-like AI learns from itself – could soon evolve to enable its application to real-world problems in the natural sciences. Collaboration with Earth scientists to identify the systems – from climate science, materials science, biology, and other areas – which can be codified to apply reinforcement learning for scientific progress and discovery is vital. For example, DeepMind co-founder, Demis Hassabis, has suggested that in materials science, a descendant of AlphaGo Zero could be used to search for a room temperature superconductor – a hypothetical substance that allows for incredibly efficient energy systems.

To conclude, we live in exciting times. It is now possible to tackle some of the world’s biggest problems with emerging technologies such as AI. It’s time to put AI to work for the planet.

The 4IR for the Earth programme is a collaboration between the World Economic Forum, PwC, and Stanford University, and which is also supported by the MAVA Foundation. The programme looks to accelerate tech innovation for Earth's most pressing environmental challenges. It will help identify, support and scale new ventures, partnerships and business models that harness tech to transform how the world tackles environmental challenges. Reports released to date in the 4IR for the Earth series can be found here.

AI Legal System


AI playing a role in decision making is not the stuff of science fiction. Focusing on the theories behind decision-making is an exercise for today and not tomorrow.  China’s first AI powered court opened in Hangzhou in 2017 and has handled more than three million cases; on the auspicious date “9/9/18”, the Beijing Internet Court opened for business. In the past year, it has handled tens of thousands of cases. Estonia has announced its own plan to deploy AI judges this year or next to hear smaller cases.

Courts in the United States are using AI for decision making spanning the domains of parole and risk assessment for convicts. Algorithms are informing decisions which have massive consequences for an individual, and the interesting (or not so interesting) part is that these algorithms are outsourced from private companies. This means that the underlying code driving the decisions is hidden under proprietary patents, effectively making the decision-making process unaccountable. After privatizing prisons, we are now heading into a world where life-altering decisions also come under the realm of private enterprise through the infiltration of software in the dispensation of justice.


Suggested movie: Minority Report


AI in Disaster Management
AI to the Rescue, A True Savior in Disaster Management

Major disasters are often accompanied by large scale chaos. In such times, the ability to act quickly and access accurate information is a critical imperative for humanitarian organizations. An AI-based solution capable of assisting seamlessly in such situations can transform the way the international community responds to large-scale disasters. This is especially important today, given that the United Nations (UN) recently issued an alert that states the world has at most 12 years to avert a climate catastrophe.


How AI can help drive faster humanitarian response

  • Providing real-time inventory updates

  • Overcoming language barriers

  • Identifying areas of high impact

AI is Breeding People – Dating Apps


“AI is the one that's doing the selection process of getting these people together to then breed and have children,” and then he joked, “some of those children further helped develop AI.” “AI is breeding humans now to what it wants,” and then he adds, “we don't even know how it works.” –Interview with Billy Crone

Loveflutter, a UK dating app, has AI that matches people based on personality traits it decodes from their tweets. It also plans to use AI to coach users through meeting offline after analyzing their chats. Going further into the coaching arena, Match launched Lara last year. The digital personal assistant is activated by Google Home and suggests a daily match as well as dating tips and activities. Then there's Badoo's creepy Lookalike feature, which uses facial recognition to match you with people who look like your favorite celebrity.

Beyond all that is AIMM, a voice-activated dating app which launched last year and has 1,000 users in Denver (it's planning to expand throughout the U.S. in coming months). An AI matchmaker, which sounds like Siri, asks you questions for a week before sending you matches. Along with those suggestions come personalized photo tours and audio snippets of your match describing their perfect date or telling an embarrassing story from childhood. There's no tapping or swiping. Once both you and your match have agreed to chat, AIMM will set up a phone call, and you decide from there if you want to meet offline. 

AIMM will throw in a joke now and then as it talks to you, too, said Kevin Teman, AIMM's creator. It can also pick up on your values through subtle conversations. For example, if someone talks a lot about money, AIMM could infer that money is important to them. 

"I didn't set out to build AI necessarily. I set out to build something like a human," Teman said, adding that AIMM remembers your previous answers and the tone and questions you warm up to. For Teman, there's no end in sight to how much AIMM, and other AI, can learn. 

AI will become a god

In the next 25 years, AI will evolve to the point where it will know more on an intellectual level than any human. In the next 50 or 100 years, an AI might know more than the entire population of the planet put together. At that point, there are serious questions to ask about whether this AI — which could design and program additional AI programs all on its own, read data from an almost infinite number of data sources, and control almost every connected device on the planet — will somehow rise in status to become more like a god, something that can write its own bible and draw humans to worship it. This “Singularity” is a quasi-spiritual idea that an AI will become smarter than humans at some point. You might laugh at the notion of an AI being so powerful that humans bow down to worship it, but several experts who talked to VentureBeat argue that the idea is a lot more feasible than you might think.


Way of the Future (2015) – Anthony Levandowski

Founder of the first church of AI says there will soon be a robot god. Are the brains behind artificial intelligence creating the gods we always wanted to worship? Don’t think so? Well, a new kind of prophet has risen from Silicon Valley and he believes AI will reign supreme. Anthony Levandowski, a self-driving car engineer, stands at the head of a religion called “Way of the Future (WOTF)”.


“What is going to be created will effectively be a god. It's not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?” –Anthony Levandowski

AI Invasion
Dangers of AI



The technological singularity—also, simply, the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. In other words, there are secularists (non-Christians) who also believe that we're headed for the end of time; they just call it something different.


Singularity is when all this knowledge begins to create a super knowledge. Then AI begins to teach itself. And then it improves on that version of itself – and then the improved version improves on the next version. And then that begins to go exponentially out of control to where, they say, it will take over all aspects of society and “destroy humanity.” They don't call it the end of times – they call it the singularity. But guess what: they agree with what the Bible says, and they agree with what is happening on our planet right now.
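The exponential shape of that self-improvement loop can be shown with a tiny numeric sketch: each "generation" improves the next by a fixed factor, so capability doubles again and again. The factor and starting level are arbitrary; this only illustrates how quickly compounding growth runs away.

```python
# Count doubling steps until a capability level crosses a threshold --
# a toy model of compounding, self-reinforcing improvement.
def generations_until(threshold, start=1.0, factor=2.0):
    """Number of improvement steps before capability exceeds `threshold`."""
    capability, gen = start, 0
    while capability < threshold:
        capability *= factor
        gen += 1
    return gen

print(generations_until(1_000_000))  # 20 -- only 20 doublings to pass a million
```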

Unpredictable behavior

    • The late Stephen Hawking, world-renowned astrophysicist and author of A Brief History of Time, believed that artificial intelligence would be impossible to control in the long term, and could quickly surpass humanity if given an opportunity:

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”



    • Few technologists have been as outspoken about the perils of AI as the prolific founder of Tesla Inc, Elon Musk.

    • Though his tweets about AI often take an alarmist tone, Musk’s warnings are as plausible as they are sensational:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

  • Musk believes that proper regulatory oversight will be crucial to safeguarding humanity’s future as AI networks become increasingly sophisticated and are entrusted with mission-critical responsibilities:

“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason.”

  • Musk has compared the destructive potential of AI networks to the risks of global nuclear conflict posed by North Korea:

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

  • He has also pointed out that AI doesn’t necessarily have to be malevolent to threaten humanity’s future. To Musk, the cold, immutable efficiency of machine logic is as dangerous as any evil science-fiction construct:

“AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”


    • Tim Urban, blogger and creator of Wait But Why, believes the real danger of AI and ASI is the fact that it is inherently unknowable. According to Urban, there’s simply no way we can predict the behavior of AI:

“And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.”


    • Considerable problems of bias and neutrality aside, one of the most significant challenges facing AI researchers is how to give neural networks the kind of decision-making and rationalization skills we learn as children.

    • According to Dr. Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, common sense is even less common in AI systems than it is in most human beings — a drawback that could create additional difficulties with future AI networks:

“A huge problem on the horizon is endowing AI programs with common sense. Even little kids have it, but no deep learning program does.”


    • Other experts fear the unintended results of AIs being given increasingly mission-critical tasks. Author and magazine journalist Nick Bilton worries that AI’s ruthless machine logic may inadvertently devise deadly “solutions” to genuinely urgent social problems:

“But the upheavals [of AI] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”


    • Academic researcher and writer Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, shares Stephen Hawking’s belief that AI could rapidly outpace humanity’s ability to control it:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”

Political instability and warfare

    • World leaders need little convincing of AI’s unprecedented capacity to reshape the geopolitical landscape. Russian President Vladimir Putin, for example, firmly believes that mastery of AI technology will have a profound impact on global political power:

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”


    • Few applications of AI are as potentially dangerous as autonomous weapons systems. As DARPA and other defense agencies around the world explore how AI could shape the landscape of modern warfare, some experts are deeply concerned by the prospect of relinquishing control over devastating weaponry to neural networks.

    • Jayshree Pandya, founder and CEO of Risk Group LLC, is an expert in disruptive technologies, and she has warned of how AI-controlled weapons systems could pose an existential threat to world peace:

“Technological development has become a rat race. In the competition to lead the emerging technology race and the futuristic warfare battleground, artificial intelligence (AI) is rapidly becoming the center of global power play. As seen across many nations, the development in autonomous weapons systems (AWS) is progressing rapidly, and this increase in the weaponization of artificial intelligence seems to have become a highly destabilizing development. It brings complex security challenges for not only each nation’s decision makers but also for the future of the humanity.”


    • Some view the competition among software developers to create increasingly sophisticated AI as a contest eerily reminiscent of the Cold War-era nuclear arms race.

    • Bonnie Docherty, associate director of Armed Conflict and Civilian Protection at the International Human Rights Clinic at Harvard Law School, believes that we must stop the development of weaponized AI before it’s too late:

“If this type of technology is not stopped now, it will lead to an arms race. If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”


    • Technological advancements such as autonomous vehicles represent a paradigm shift in human society. According to Max Erik Tegmark, physicist and professor at the Massachusetts Institute of Technology, they also represent weaknesses that rogue actors will be able to exploit in future wars:

“The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defenses. If you can hack some of his weapons systems as well, even better.”


    • For all the idealism of machine learning entrepreneurs, it is virtually impossible to separate the scientific from the political when it comes to potential applications of AI technology.

    • Writer Gideon Rosenblatt believes that robust, forward-thinking policies must be enacted in conjunction with developments in AI to ensure that the governments of the world are adequately prepared for the vast disruption that AI promises:

“AI nationalism, for the US and China, seems to be paying off in the short term. But it seems irresponsible to assume there’ll be no consequences to developing cutting-edge AI without policies and development guidelines specific to that technology.”


    • Some experts are concerned that the Pentagon and other national defense bodies around the world are too focused on developing autonomous weapons systems and not focused enough on regulating them.

    • Jon Wolfsthal, nonresident fellow at the Project on Managing the Atom at Harvard University and former senior director at the National Security Council for Arms Control and Nonproliferation, believes that more must be done to address the urgent need for regulatory oversight of disruptive weapon technologies:

“We may not be able to stop lethally armed systems with artificial intelligence from coming online. Maybe we should not even try. But we have to be more thoughtful as we enter this landscape. The risks are incredibly high, and it is hard to imagine an issue more worthy of informed, national debate than this.”


    • Some researchers fear that increased adoption of AI will exacerbate today’s polarized political climate. Machine-learning engineer Ian Hogarth believes that artificial intelligence will invariably result in the rise of “AI nationalism”:

“Continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society. The transformation of both the economy and the military by machine learning will create instability at the national and international level forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent.”

Ethical and societal impacts

    • One aspect of AI that is discussed far less frequently than its potential for destruction is whether AI can be taught to respect human ethics.

    • Apple CEO Tim Cook has long been an outspoken advocate for user privacy. He argues that creating AI systems that can interpret and value ethical approaches to society’s problems is a serious responsibility to future generations that companies like Apple must reckon with:

“Advancing AI by collecting huge personal profiles is laziness, not efficiency. For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It’s not only a possibility, it is a responsibility. In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence.”


    • The under-representation of women in computer science and information technology is an ongoing concern for business leaders, technology companies, and academia. Author and machine vision expert Olga Russakovsky says greater diversity in the AI field is essential if the technology is to solve society’s most difficult problems:

“We are bringing the same kind of people over and over into the field. And I think that’s actually going to harm us very seriously down the line…diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”


    • British Prime Minister Theresa May has long been an outspoken advocate of AI technology. She acknowledges the inherent risks in the technology’s advancement, and emphasizes that properly channeling its power is crucial for humanity:

“British-based companies…are pioneering the use of data science and Artificial Intelligence to protect companies from money laundering, fraud, cyber-crime and terrorism. In all these ways, harnessing the power of technology is not just in all our interests — but fundamental to the advance of humanity…Right across the long sweep of history — from the invention of electricity to the advent of factory production — time and again initially disquieting innovations have delivered previously unthinkable advances and we have found the way to make those changes work for all our people. Now we must find the way to do so again.”


    • Some technologists worry that AI will be used to hurt and oppress people. Kenneth Stanley, senior engineering manager and staff scientist at Uber AI Labs, is one such individual.

    • In Stanley’s view, the potential for AI could represent a grave danger to the most vulnerable members of society, a problem that requires a holistic approach to technological oversight:

“I think that the most obvious concern is when AI is used to hurt people. There are a lot of different applications where you can imagine that happening. We have to be really careful about letting that bad side get out. [Sorting out how to keep AI responsible is] a very tricky question; it has many more dimensions than just the scientific. That means all of society does need to be involved in answering it.”


    • Tabitha Goldstaub, co-founder of AI market intelligence platform CognitionX, explains that failing to account for gender bias as AI technology advances could be catastrophic for women’s rights:

“We’re ending up coding into our society even more bias, and more misogyny and less opportunity for women. We could get transported back to the dark ages, pre-women’s lib, if we don’t get this right.”

  • The danger of unequal gender representation in AI isn’t solely an ideological problem, Goldstaub adds:

“Men and women have different symptoms when having a heart attack — imagine if you trained an AI to only recognize male symptoms. You’d have half the population dying from heart attacks unnecessarily.”


    • According to Brian Green, director of technology ethics at Santa Clara University, AI is the most important technological advancement since mankind harnessed the power of fire in the Stone Age:

“There are a lot of people suddenly interested in A.I. ethics because they realize they’re playing with fire. And this is the biggest thing since fire.”


    • Tess Posner, CEO of nonprofit advocacy group AI4ALL, is keenly aware of AI’s limitations, especially when it comes to perpetuating existing societal biases:

“A lot of people assume that artificial intelligence…is just correct and it has no errors. But we know that that’s not true, because there’s been a lot of research lately on these examples of being incorrect and biased in ways that amplify or reflect our existing societal biases.”


    • Andrew Ng, co-founder of Google Brain and former chief scientist of Baidu, believes questions about the ethics of AI are much bigger than individual use cases:

“Of the things that worry me about AI, job displacement is really high up. We need to make sure that wealth we create [through AI] is distributed in a fair and equitable way. Ethics to me isn’t about making sure your robot doesn’t turn evil. It’s about really thinking through, what is the society we’re building? And making sure that it’s a fair and transparent and equitable one.”


    • As one of the world’s largest and most influential technology companies, Google is in a unique position to advocate for the use of AI technology in everyday life.

    • The company has been using AI and neural networks for several years, but CEO Sundar Pichai believes that increasingly sophisticated AI tech must be used responsibly:

“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”


    • With AI technology poised to revolutionize virtually every industry and vertical, it’s vital that major tech companies approach the development of AI technology responsibly.

    • Microsoft CEO Satya Nadella sees AI and machine learning transforming every aspect of modern life:

“Digital technology, pervasively, is getting embedded in every place: every thing, every person, every walk of life is being fundamentally shaped by digital technology—it is happening in our homes, our work, our places of entertainment. It’s amazing to think of a world as a computer. I think that’s the right metaphor for us as we go forward.”

  • Like Pichai and other leading technology executives, Nadella has warned of the risk of human biases being built into AI technology, which demands a deliberate, conscientious approach when developing AI applications:

“Technology developments just don’t happen; they happen because of us as humans making design choices—and those design choices need to be grounded in principles and ethics, and that’s the best way to ensure a future we all want.”

  • Nadella explains that part of the problem is that human language — the building blocks of machine-learning systems and AI networks — is inherently biased. Unless researchers consciously account for such biases, “neutral” technology becomes deeply flawed:

“One of the fundamental challenges of AI, especially around language understanding, is that the models that pick up language learn from the corpus of human data. Unfortunately the corpus of human data is full of biases, so you need to invest in tooling that allows you to de-bias when you model language.”
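
Nadella’s point about tooling to “de-bias when you model language” can be sketched with a toy example, loosely modeled on the “hard de-biasing” idea from word-embedding research. The two-dimensional vectors and word list below are invented purely for illustration and are not real embeddings:

```python
# Toy sketch (made-up 2-D "embeddings"): remove the component of a word
# vector that lies along a learned bias direction, so that words which
# should be gender-neutral no longer skew toward "he" or "she".

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return [x * s for x in a]

# Hypothetical vectors "learned" from a biased corpus.
vec = {
    "he": [1.0, 0.0],
    "she": [-1.0, 0.0],
    "doctor": [0.4, 0.9],   # skews toward "he" on axis 0
}

# The bias direction: half the difference between the gendered pair.
gender_dir = scale(sub(vec["he"], vec["she"]), 0.5)  # [1.0, 0.0]

def neutralize(v, direction):
    # Project v onto the bias direction and subtract that component.
    coeff = dot(v, direction) / dot(direction, direction)
    return sub(v, scale(direction, coeff))

print(neutralize(vec["doctor"], gender_dir))  # -> [0.0, 0.9]
```

After neutralization, “doctor” retains its other semantic content (axis 1) but no longer carries a gender component, which is the kind of correction Nadella’s “tooling” refers to.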


    • Joanna Bryson, an AI researcher at the University of Bath in England, reiterated the danger of unconscious bias affecting AI in a piece published by The Guardian:

“People expected AI to be unbiased; that’s just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things.”


    • Many technologists have spoken out about the potential abuses of vulnerable people at the hands of AI-driven systems, particularly in the context of the criminal justice system.

    • David Robinson, managing director and founder of the think tank Upturn, has studied AI’s potential impact on everything from predictive policing to bail reform. He says that AI systems supplied with flawed data will inevitably perpetuate many of the injustices already felt across marginalized communities:

“The basic problem is those forecasts are only as good as the data they are based on. People in heavily policed communities have a tendency to get in trouble. These systems are apt to continue those patterns by relying on that biased data.”
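
Robinson’s point about forecasts being “only as good as the data they are based on” can be made concrete with a minimal, entirely hypothetical simulation: a toy predictive-policing loop that allocates patrols in proportion to historical arrest counts. A district that starts out over-policed keeps generating more recorded arrests, and therefore more patrols, even though the underlying crime rate is identical everywhere. All numbers and names are invented:

```python
# Hypothetical feedback loop: biased historical data -> biased allocation
# -> more biased data. The true crime rate is the SAME in both districts.

historical_arrests = {"district_a": 80, "district_b": 20}  # skewed record

def allocate_patrols(arrests, total_patrols=10):
    # Patrols assigned in proportion to past arrest counts.
    total = sum(arrests.values())
    return {d: round(total_patrols * n / total) for d, n in arrests.items()}

def simulate_year(arrests, true_crime_rate=0.5):
    # More patrols -> more recorded arrests, regardless of actual crime.
    patrols = allocate_patrols(arrests)
    return {d: arrests[d] + int(patrols[d] * 10 * true_crime_rate)
            for d in arrests}

record = dict(historical_arrests)
for _ in range(3):
    record = simulate_year(record)

print(allocate_patrols(record))  # -> {'district_a': 8, 'district_b': 2}
```

Despite identical underlying crime, district_a still receives 8 of 10 patrols after three simulated years: the algorithm faithfully “continues those patterns by relying on that biased data.”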


    • Despite historical racial and gender disparities in the technology sector, more women and people of color are developing the technologies of tomorrow than ever before. Still, philanthropist Melinda Gates of the Bill & Melinda Gates Foundation believes that complacency could undermine this progress and exacerbate existing problems:

“If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.”


    • World-renowned computer scientist and “Godfather of Deep Learning” Geoffrey Hinton has been an outspoken skeptic of the applications of AI for many years.

    • Echoing the warnings of Joanna Bryson and David Robinson, Hinton has spoken of the potential for AI technology to exacerbate systemic inequality, which he believes is a direct result of the flawed nature of many social systems:

“If you can dramatically increase productivity and make more goodies to go around, that should be a good thing. Whether or not it turns out to be a good thing depends entirely on the social system, and doesn’t depend at all on the technology. People are looking at the technology as if the technological advances are a problem. The problem is in the social systems, and whether we’re going to have a social system that shares fairly, or one that focuses all the improvement on the 1% and treats the rest of the people like dirt. That’s nothing to do with technology. . . . I hope the rewards will outweigh the downsides, but I don’t know whether they will, and that’s an issue of social systems, not with the technology.”


    • Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute and the Stanford Vision and Learning Lab, stresses the urgent need for diversifying the AI field:

“As an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity and we’re missing a whole generation of diverse technologists and leaders.”

  • Li believes that the moral and ethical responsibility for developing AI systems must be shared across the private industry as well as government policy and academic research:

“We all have a responsibility to make sure everyone — including companies, governments and researchers — develop AI with diversity in mind…Technology could benefit or hurt people, so the usage of tech is the responsibility of humanity as a whole, not just the discoverer. I am a person before I’m an AI technologist.”


    • Rana el Kaliouby is the co-founder and CEO of Affectiva, which develops emotion recognition technology. El Kaliouby believes that social and emotional intelligence have not been prioritized enough in the AI field, which could be detrimental to society:

“The field of AI has traditionally been focused on computational intelligence, not on social or emotional intelligence. Yet being deficient in emotional intelligence (EQ) can be a great disadvantage in society.”


    • AI may pose unprecedented risks, but Daniela Rus, roboticist and director of MIT’s Computer Science and Artificial Intelligence Laboratory, explains that AI itself is morally neutral:

“Critics often cite job displacement as a reason to discourage further AI research. But history is rife with innovations that have been disruptive: does anyone look back and regret Eli Whitney inventing the cotton gin or James Watt developing the steam engine? Like any technology, AI isn’t inherently good or bad. As my MIT colleague Max Tegmark likes to say, ‘The question is not whether you are ‘for’ or ‘against’ AI — that’s like asking our ancestors if they were for or against fire.’”


    • AI technology will likely have a profound impact on law enforcement. Numerous police departments in the United States already rely on automated facial recognition and algorithm-driven predictive policing.

    • But according to Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, D.C., the mere threat of autonomous surveillance is an effective means of regulating the public’s behavior. This has significant implications for the social order over the coming decades, particularly in heavily surveilled nations such as China:

“The whole point is that people don’t know if they’re being monitored, and that uncertainty makes people more obedient.”

Surpassing human intelligence


    • Not all technologists see AI as a harbinger of doom. Futurist and author Ray Kurzweil views AI primarily as a tool for humans to expand their intelligence.

    • Kurzweil’s work focuses on what he calls “the singularity” — the point at which artificial superintelligence (ASI) will surpass the human brain and let people live forever. He says the merging of man and machine is inevitable:

“We’re merging with these non-biological technologies. We’re already on that path. I mean, this little mobile phone I’m carrying on my belt is not yet inside my physical body, but that’s an arbitrary distinction. It is part of who I am—not necessarily the phone itself, but the connection to the cloud and all the resources I can access there.”


    • To Yann LeCun, chief artificial intelligence scientist at Facebook AI Research, the biggest problem with AI isn’t its potentially nefarious applications, but rather a profound misunderstanding of the technology itself:

“We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature. That’s not to say we shouldn’t think about them, but there’s no danger in the immediate or even medium term. There are real dangers in the department of AI, real risks, but they’re not Terminator scenarios.”


    • The world of computing has advanced tremendously since British technologist Sir Clive Sinclair created the Sinclair ZX80, the first mass-market home computer to be sold in Britain in 1980.

    • Even then, Sinclair recognized the potential of computers to surpass human intelligence, claiming that computers would herald the end of “the long monopoly” of carbon-based life forms on Earth.

    • Sinclair believes AI’s rise to dominance is inevitable, but not in the immediate future:

“Once you start to make machines that are rivaling and surpassing humans with intelligence it’s going to be very difficult for us to survive…But it’s not imminent and I can’t go round worrying about it.”


    • AI technology has advanced rapidly in recent years, and Karl Frederick Rauscher, managing director and CEO of the Global Information Infrastructure Commission (GIIC), fears that our dominance over machines will be short-lived:

“AI can compete with our brains and robots can compete with our bodies, and in many cases, can beat us handily already. And the more time that passes, the better these emerging technologies will become, while our own capabilities are expected to remain more or less the same.”

  • Rauscher has also speculated about potentially sinister applications of AI and how much power companies that wield it may be able to exert over the general public:

“Concerns regarding how powerful companies may choose to design new technologies are justified, given that their primary interest is to maximize profits for their shareholders. Many of them thrive on not-so-transparent business models that collect and then leverage data associated with users. Tomorrow’s big tech companies will leverage intelligence (via AI) and control (via robots) associated with the lives of their users. In such a world, third-party entities may know more about us than we know about ourselves. Decisions will be made on our behalf and increasingly without our awareness, and those decisions won’t necessarily be in our best interests.”


    • The late American mathematician Claude Shannon is known as the “father of information theory,” having published a landmark paper on the topic in 1948. Shannon’s take on the fading era of mankind’s dominance and the inevitable rise of the machines was both cynical and darkly comical:

“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.”


    • Perhaps by virtue of their role as chroniclers and storytellers, it often falls to authors to warn us of the potential dangers of exciting new technologies. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, fears that mankind is doomed to a life of servitude in light of AI’s vastly superior intellect:

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest. So when there is something smarter than us on the planet, it will rule over us on the planet.”


    • The increasingly personalized assistive technologies promised by AI have the potential to make everything from shopping to voting a more intimate, engaged experience.

    • However, Heather Roff, a nonresident fellow in the Foreign Policy program at the Brookings Institution, believes these technologies could be easily manipulated to control how people shop, think, and live their lives:

“[Algorithms] will manipulate my beliefs about what I should pursue, what I should leave alone, whether I should want kids, get married, find a job, or merely buy that handbag. It could be very dangerous.”


    • Futurist and “techno-philosopher” Gray Scott has little problem conceiving of a world in which AI has risen to dominance over its former masters. To Scott, the question of AI’s ascension is a matter of when, not if:

“Once AI become self-aware, the cognitive hierarchy will be transformed forever where we humans are no longer the dominant species.”


    • Dozens of experts have voiced concerns about the possibility of AI inheriting our flaws and biases, but few have said so as succinctly as Neil Jacobstein, chair of the artificial intelligence and robotics track at Singularity University:

“It’s not artificial intelligence I’m worried about, it’s human stupidity.”


    • Astrophysicist Neil deGrasse Tyson is never one to shy away from controversial opinions, particularly on social media. When it comes to AI, however, Tyson isn’t taking any chances:

“Time to behave, so when Artificial Intelligence becomes our overlord, we’ve reduced the reasons for it to exterminate us all.”


    • Science fiction authors have long been fascinated with artificial intelligence. Louis Del Monte, physicist and author of The Artificial Intelligence Revolution, believes that AI will become so intelligent in the coming decades that humans won’t even be able to fully grasp its power:

“Between 2040 and 2045, we will have developed a machine or machines that are not only equivalent to a human mind, but more intelligent than the entire human race combined.”


    • Waymo autonomous vehicle engineer and entrepreneur Anthony Levandowski created Way of the Future, the first church of artificial intelligence. He believes that with the interconnected systems of cell phones, sensors, and data centers around the world, AI will ultimately become omniscient and omnipresent, like a deity:

“What is going to be created will effectively be a god. It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”

  • Levandowski thinks that a fundamental shift in power will occur, and that the best we can hope for is a peaceful transition:

“In the future, if something is much, much smarter, there’s going to be a transition as to who is actually in charge. What we want is the peaceful, serene transition of control of the planet from humans to whatever. And to ensure that the ‘whatever’ knows who helped it get along.”

Reshaping the workforce

    • Like many of Silicon Valley’s earliest pioneers, Apple co-founder Steve “Woz” Wozniak has expressed cautious optimism about the disruptive potential of AI. But in Wozniak’s view, AI also represents a profound danger to the future of mankind, and may ultimately replace human beings altogether:

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”


    • One common theme in discussions about the potential of AI is the considerable impact it will have on the global employment market.

    • Chinese venture capitalist and AI expert Kai-Fu Lee highlighted how AI could affect the workforce of tomorrow in an interview with 60 Minutes:

“AI will increasingly replace repetitive jobs. Not just for blue-collar work, but a lot of white-collar work. Basically chauffeurs, truck drivers, anyone who does driving for a living, their jobs will be disrupted more in the 15- to 20-year time frame. And many jobs that seem a little bit complex, chef, waiter, a lot of things, will become automated. We’ll have automated stores, automated restaurants, and all together, in 15 years, that’s going to displace about 40 percent of the jobs in the world.”


    • Some executives believe that no job will be safe from the efficiencies promised by a tireless robotic workforce. Brian Chesky, co-founder and CEO of Airbnb, has voiced concern about the negative impact that robotic automation will have on the lives of working people:

“I’m concerned about the concept of automation. Many jobs will be automated; a lot will be. This will have benefits for people but it also has a huge cost. I worry that ‘Made in America’ will become ‘Made by robots in America.’”


    • The world of entertainment has been fascinated with the notion of intelligent computers for more than 30 years. However, while many people see AI as an exciting new frontier in home entertainment, Netflix co-founder and CEO Reed Hastings has a somewhat less optimistic outlook on AI’s future role in how we spend our leisure time. He has even gone so far as to speculate that AI may one day be part of Netflix’s audience:

“Over twenty to fifty years, you get into some serious debate over humans. I don’t know if you can really talk about entertaining at that point. I’m not sure if in twenty to fifty years we are going to be entertaining you, or entertaining AIs.”


    • Scientist and author Gary Marcus speculates that the efficiencies promised by AI will not only supplant manual workers in industries such as manufacturing, but ultimately even creative professionals:

“But a century from now, nobody will much care about how long it took, only what happened next. It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine. There might be a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.”


    • Y Combinator president and co-chairman of OpenAI Sam Altman thinks that AI does represent a grave threat to humanity’s future — but will present plenty of exciting investment opportunities in the immediate future:

“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”


    • Some experts, including creator of the World Wide Web Sir Tim Berners-Lee, worry that the wide-scale adoption of AI in the financial sector could have disastrous consequences that would be nearly impossible to mitigate:

“So when AI starts to make decisions such as who gets a mortgage, that’s a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies. So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?”


    • While many AI researchers are excited by the possible applications of AI in fields such as healthcare and education, not everyone agrees that replacing human professionals with AI constructs is a good idea.

    • Barbara J. Grosz, the Higgins Professor of Natural Sciences at Harvard University and the first woman to serve as president of the Association for the Advancement of Artificial Intelligence, believes that allowing AI to completely replace human beings in specialized occupations would be a grave error:

“With regard to health care and education, I think there’s a huge ethical question for society at large. We could build those systems to complement and work with physicians and teachers, or we could try to save money by having them replace people. It would be a terrible mistake to replace people.”


    • To some experts, the most urgent AI-related issue is how widely the technology is being used in education, healthcare, and the criminal justice system in ways that we may not necessarily understand.

    • Technology writer James Vincent believes that while we must “future-proof” AI from becoming too powerful, society’s growing reliance on algorithms that we only vaguely understand is just as problematic:

“If a computer can do one-third of your job, what happens next? Do you get trained to take on new tasks, or does your boss fire you, or some of your colleagues? What if you just get a pay cut instead? Do you have the money to retrain, or will you be forced to take the hit in living standards? It’s easy to see that finding answers to these questions is incredibly challenging. And it mirrors the difficulties we have understanding other complex threats from artificial intelligence. For example, while we don’t need to worry about super-intelligent AI running amok any time soon, we do need to think about how machine learning algorithms used today in healthcare, education, and criminal justice, are making biased judgements.”

AI is Developing Feelings

Google Doesn't Want You to See This Technology!

(Situation Update)

Are we seeing the passage below being fulfilled?

Rev 13:15 And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.


Killer Robots Are Almost Here

A $13 trillion AI experiment gets out of control, w Elon Musk, Ameca.  (Digital Engine)

Start at 6:14, where the AI's response to a question about how it views humanity is ominous.

Elon Musk, Stephen Hawking's Fear of AI's Possible Threat to Human Existence Coming True? Killer AI Might Be Inevitable

A mock "killer robot" is pictured in central London on April 23, 2013, during the launch of the Campaign to Stop "Killer Robots," which calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons that could select and attack targets without any human intervention.

The use of artificial intelligence (AI) has grown rapidly over the past few years. It is now used in a range of applications, including security and the military, as well as in medicine, and there is even a robot that may be able to write a theater play.

However, movies that feature AI, such as The Terminator and The Matrix, have convinced many that killer AI could become reality in the near future: robots or computers controlled by AI could develop their own personalities and think for themselves, posing a danger to the existence of humankind.

Could Killer AI Come True in Our Lifetime?

According to The Daily Star, AI developers are now discussing how to limit future AI-powered machines so they will not go rogue. The news site reports that AI specialist Matthew Kershaw believes AI may reach alarming levels within the lifetimes of today's youth.

His comments sound like what Professor Stephen Hawking and SpaceX CEO Elon Musk have feared about AI. The news outlet quoted Professor Hawking, the greatest scientific genius of the modern era, who said: "The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race."

The outlet likewise quoted Elon Musk, who agrees with Professor Hawking: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful."

However, Kershaw said that true artificial general intelligence that is powerful enough to think for itself will not be available anytime soon, given that even humans do not really understand what consciousness really means. He pointed out that even though existing AI-enabled computers can do incredible things, they still do not learn exactly the same way as children do.

Meanwhile, UNESCO worries that AI-powered killer robots may someday arise because many scientists and businesses are seeking approval for the use of autonomous weapons systems. Militaries, on the other hand, insist that the decision to kill or not to kill will always rest with humans.


Artificial General Intelligence Can Make Machines Think Like Humans

According to Accenture, the ultimate goal of artificial general intelligence (AGI) is to replicate the cognitive abilities of humans. It is sometimes referred to as strong AI because it aims to make machines capable of common-sense reasoning that humans apply all the time.

A human child could easily perceive and understand what is taught to them, like identifying what a car looks like. However, the conventional machine learning system today needs to be shown thousands of pictures of a car to identify one in a set of pictures.
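The data hunger described above can be illustrated with a toy sketch (pure Python, not from any of the articles quoted here). A nearest-centroid "classifier" is trained on two synthetic clusters of points standing in for "car" and "not car" images; all names, cluster positions, and sample counts are illustrative assumptions, not a real vision system.

```python
import random
import math

def make_point(label):
    # Two synthetic "image feature" clusters: label 1 near (1, 1), label 0 near (-1, -1)
    cx = 1.0 if label == 1 else -1.0
    return (cx + random.gauss(0, 1.5), cx + random.gauss(0, 1.5)), label

def train_centroids(n):
    # "Learn" each class by averaging n labeled examples into a centroid
    centroids = {}
    for label in (0, 1):
        sx = sy = 0.0
        for _ in range(n):
            (x, y), _ = make_point(label)
            sx += x
            sy += y
        centroids[label] = (sx / n, sy / n)
    return centroids

def accuracy(centroids, trials=2000):
    # Classify fresh points by nearest centroid and count correct predictions
    correct = 0
    for _ in range(trials):
        label = random.randint(0, 1)
        (x, y), _ = make_point(label)
        pred = min(centroids, key=lambda lab: math.dist((x, y), centroids[lab]))
        correct += (pred == label)
    return correct / trials

random.seed(0)
small = accuracy(train_centroids(3))      # a child-like handful of examples
large = accuracy(train_centroids(3000))   # the data volumes machine learning typically needs
print(f"3 examples/class: {small:.2f}, 3000 examples/class: {large:.2f}")
```

Even this crude learner only approaches its best possible accuracy when fed thousands of examples per class, while a child generalizes from a handful; deep-learning image classifiers are far more capable but even hungrier for labeled data.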


Despite decades of AI research, humanity is still early on its journey toward true AGI. As of now, many untapped potentials remain to be discovered in AI systems that could be used in a wide range of applications.



AI Weapons

The Third Revolution in Warfare

First there was gunpowder. Then nuclear weapons. Next: artificially intelligent weapons.

On the 20th anniversary of 9/11, against the backdrop of the rushed U.S.-allied Afghanistan withdrawal, the grisly reality of armed combat and the challenge posed by asymmetric suicide terror attacks grow harder to ignore.

But weapons technology has changed substantially over the past two decades. And thinking ahead to the not-so-distant future, we must ask: What if these assailants were able to remove human suicide bombers or attackers from the equation altogether? As someone who has studied and worked in artificial intelligence for the better part of four decades, I worry about such a technology threat, born from artificial intelligence and robotics.

Autonomous weaponry is the third revolution in warfare, following gunpowder and nuclear arms. The evolution from land mines to guided missiles was just a prelude to true AI-enabled autonomy—the full engagement of killing: searching for, deciding to engage, and obliterating another human life, completely without human involvement.


An example of an autonomous weapon in use today is the Israeli Harpy drone, which is programmed to fly to a particular area, hunt for specific targets, and then destroy them using a high-explosive warhead nicknamed “Fire and Forget.” But a far more provocative example is illustrated in the dystopian short film Slaughterbots, which tells the story of bird-sized drones that can actively seek out a particular person and shoot a small amount of dynamite point-blank through that person’s skull. These drones fly themselves and are too small and nimble to be easily caught, stopped, or destroyed.

These “slaughterbots” are not merely the stuff of fiction. One such drone nearly killed the president of Venezuela in 2018, and could be built today by an experienced hobbyist for less than $1,000. All of the parts are available for purchase online, and all open-source technologies are available for download. This is an unintended consequence of AI and robotics becoming more accessible and inexpensive. Imagine, a $1,000 political assassin! And this is not a far-fetched danger for the future but a clear and present danger.

We have witnessed how quickly AI has advanced, and these advancements will accelerate the near-term future of autonomous weapons. Not only will these killer robots become more intelligent, more precise, faster, and cheaper; they will also learn new capabilities, such as how to form swarms with teamwork and redundancy, making their missions virtually unstoppable. A swarm of 10,000 drones that could wipe out half a city could theoretically cost as little as $10 million.

Even so, autonomous weapons are not without benefits. Autonomous weapons can save soldiers’ lives if wars are fought by machines. Also, in the hands of a responsible military, they can help soldiers target only combatants and avoid inadvertently killing friendly forces, children, and civilians (similar to how an autonomous vehicle can brake for the driver when a collision is imminent). Autonomous weapons can also be used defensively against assassins and perpetrators.

But the downsides and liabilities far outweigh these benefits. The strongest such liability is moral—nearly all ethical and religious systems view the taking of a human life as a contentious act requiring strong justification and scrutiny. United Nations Secretary-General António Guterres has stated, “The prospect of machines with the discretion and power to take human life is morally repugnant.”

Autonomous weapons lower the cost to the killer. Giving one’s life for a cause—as suicide bombers do—is still a high hurdle for anyone. But with autonomous assassins, no lives would have to be given up for killing. Another major issue is having a clear line of accountability—knowing who is responsible in case of an error. This is well established for soldiers on the battlefield. But when the killing is assigned to an autonomous-weapon system, the accountability is unclear (similar to accountability ambiguity when an autonomous vehicle runs over a pedestrian).

Such ambiguity may ultimately absolve aggressors for injustices or violations of international humanitarian law. And this lowers the threshold of war and makes it accessible to anyone. A further related danger is that autonomous weapons can target individuals, using facial or gait recognition, and the tracing of phone or IoT signals. This enables not only the assassination of one person but a genocide of any group of people. One of the stories in my new “scientific fiction” book based on realistic possible-future scenarios, AI 2041, which I co-wrote with the sci-fi writer Chen Qiufan, describes a Unabomber-like scenario in which a terrorist carries out the targeted killing of business elites and high-profile individuals.

Greater autonomy without a deep understanding of meta issues will further boost the speed of war (and thus casualties) and will potentially lead to disastrous escalations, including nuclear war. AI is limited by its lack of common sense and human ability to reason across domains. No matter how much you train an autonomous-weapon system, the limitation on domain will keep it from fully understanding the consequences of its actions.

In 2015, the Future of Life Institute published an open letter on AI weapons, warning that “a global arms race is virtually inevitable.” Such an escalatory dynamic represents familiar terrain, whether the Anglo-German naval-arms race or the Soviet-American nuclear-arms race. Powerful countries have long fought for military supremacy. Autonomous weapons offer many more ways to “win” (the smallest, fastest, stealthiest, most lethal, and so on).


Pursuing military might through autonomous weaponry could also cost less, lowering the barrier of entry to such global-scale conflicts. Smaller countries with powerful technologies, such as Israel, have already entered the race with some of the most advanced military robots, including some as small as flies. With the near certainty that one’s adversaries will build up autonomous weapons, ambitious countries will feel compelled to compete.