Artificial intelligence (AI) has evolved over the past few years from a futuristic idea into a powerful technology that is reshaping daily life. AI systems are creating music, driving cars, flagging credit card fraud, screening X-rays for fractures, and even “helping” children with their homework.
As AI technologies become more powerful and widely used, pressing concerns are mounting about how they may affect jobs, creativity, and our understanding of ourselves. Barry O’Sullivan, a computer scientist at University College Cork who specializes in AI and ethics, says the hype can send many people down AI apocalypse rabbit holes.
“To be honest, I hope the world would calm down a little bit when it comes to AI,” O’Sullivan said at a recent IPR discussion. “It is not going to kill us all. It is not going to take all of our jobs.”
To better understand AI and the changes it will bring, IPR asked faculty experts how they are using and studying AI, what they are learning, and what the future might hold. They emphasize that how we choose to use and govern AI will shape its future more than the technology itself.

What is AI?
“Artificial intelligence is a rather weaselly word,” remarked Rob Voigt, an IPR computational linguist. “It can signify many different things to many different people.”
The term AI can cover a wide range of machine learning methods, including familiar statistical models like regression. According to Voigt, the distinction often comes down to how a system is used: AI usually refers to a machine carrying out tasks typically performed by humans.
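The point that even ordinary regression counts as machine learning can be made concrete. The sketch below fits a one-variable linear regression from scratch; the data points are invented purely for illustration.

```python
# A from-scratch one-variable linear regression (ordinary least squares).
# Like any machine learning method, it fits parameters to data in order
# to make predictions on new inputs.

def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Data lying exactly on y = 2x + 1, so the fit recovers those parameters.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Whether we call this "statistics" or "AI" depends less on the math than on what the system is being used to do.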
The phrase was first used in a 1955 research proposal by a team of scientists whose stated goal was to make machines “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
Voigt and his colleagues at Northwestern’s Linguistic Mechanisms Lab now use machine learning models to find patterns in massive volumes of audio and text, such as 911 calls and police encounters.
These studies would take far longer with human labor alone, since researchers would need to read or listen to each conversation and annotate it by hand.
He said, “There is too much data for a human being to look at every example that we want someone to look at.”
Although Voigt’s research team has trained algorithms to detect respect in human interactions, such as when someone is addressed as “sir” rather than “dude,” more nuanced conversational cues remain difficult to identify.
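The contrast Voigt describes, easy surface cues versus hard nuanced ones, can be illustrated with a toy marker-counting scorer. The word lists and the scoring rule below are invented for illustration and are not the lab’s actual method.

```python
# Toy lexicon-based respect scorer: surface markers like "sir" are easy
# to match against a word list, which is why the subtler conversational
# cues are the genuinely hard part.

RESPECTFUL = {"sir", "ma'am", "please", "thank"}
CASUAL = {"dude", "bro", "man"}

def respect_score(utterance: str) -> int:
    """+1 for each respectful marker, -1 for each casual marker."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return len(words & RESPECTFUL) - len(words & CASUAL)

polite = respect_score("Thank you, sir.")   # positive score
casual = respect_score("What's up, dude?")  # negative score
```

A scorer like this catches explicit address terms but says nothing about tone, irony, or context, which is where the trained models, and their limits, come in.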
“We can train an artificial intelligence to do anything that you can imagine asking a human to do,” he said. “The crucial question is: how well can we make that work?”
Just How Intelligent is AI?
Although ChatGPT and other AI tools present themselves as amiable, eager-to-please assistants, can they truly “think” in the same way as humans?
According to Jessica Hullman, an IPR computer scientist who studies how AI can support human decision-making, AI systems carry the biases and shortcomings of their developers. These tools are further strengthened, and weakened, by how the models are tuned to reflect human preferences after training.
“The models start to get quite good at producing things that humans appreciate,” she said.
“It makes them more compelling,” Hullman explained. “They get better at things like apologizing for their ignorance, but they also get better at seeming authoritative, because people enjoy when things sound more authoritative.”
Human choices, such as choosing training data and establishing development priorities, are crucial even with cutting-edge technologies like generative AI.
“AI models do not think like people,” stated Hatim Rahman, an IPR associate and management expert. “They are producing statistically likely outputs, which are frequently coherent, but in the end, it will be up to us to decide whether the output is intelligent.”
Will AI Take Our Jobs?
Rahman expects AI’s impact on labor markets to be gradual. “We are not likely to witness mass layoffs, nor massive gains in productivity,” he said.
Much of the conversation around AI reflects the “innovation fallacy”: the assumption that significant technological advances inevitably lead to profound societal change.
“Technology’s potential seldom foretells how it will affect workers and the labor market. Rather, it is organizational, cultural, and societal factors,” Rahman stated.
Despite rapid innovation by AI developers, many organizations are slow to adopt AI tools because of concerns such as data security.
A crucial factor as AI’s workplace effects unfold, Rahman says, is “occupational power”: employees’ ability to influence how technology and other changes are applied in their jobs. Lawyers, shielded by bar association regulations, have largely kept AI out of courtrooms, while customer service representatives are far more exposed to automation.
Some professions have incorporated AI in ways that improve productivity while preserving or even raising job quality. Rahman notes, for example, that although commercial flight automation has existed for decades, self-flying aircraft remain a long way off. Instead, both safety and pilots’ pay have improved.
AI advancement may also open up new opportunities, though it is unclear who will benefit.
“Who will be hired for those positions? If they mostly go to people with four-year college degrees, it is likely to replicate some of the disparity we have seen with previous technological advances,” Rahman said.
Retraining programs can help employees change careers when AI reorganizes roles, as long as employers adapt by recognizing these unconventional forms of training.
Computer scientist and IPR associate V.S. Subrahmanian contends that employees who successfully incorporate AI into their work will have a competitive advantage.
“Those who can use AI to perform their jobs much more effectively than they can now will succeed in this new era,” he stated.
“You are not fighting AI; you are fighting others who might be able to harness AI faster than you,” Subrahmanian continued. “You must acknowledge that new things are constantly occurring. You have to constantly adapt and reinvent yourself.”
How Is AI Changing Us?
Although AI can greatly enhance our capabilities, its widespread use may eventually erode our knowledge and originality. According to Voigt, while the effects of AI-generated content may appear small at first, they could have a significant long-term impact on language diversity.
Exposure to machine-generated language may “change how people actually talk on a day-to-day basis,” Voigt said. “We may be approaching a time when a significant portion of the content you see online is created by models. Some people even claim that this may already be the case.”
Hullman expresses similar worries: “This uniformity of information is a major issue. At what point does an expert’s own domain expertise begin to decline when they are largely dependent on models?”
To avoid over-reliance on AI and encourage sound decision-making, developers must carefully consider how AI tools communicate uncertainty. One strategy is to design workflows where AI offers multiple options instead of just one, requiring more deliberate judgment from the human user.
Hullman and her colleagues showed experts such as judges and physicians how different AI models, all with the same accuracy rates, frequently yield different answers to the same problem.
“That helps people better understand what machine learning is truly doing. There is seldom just one answer,” she stated.
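The effect Hullman demonstrated, equally accurate models disagreeing case by case, is easy to reproduce in miniature. The dataset and the two rule-based “models” below are invented for illustration.

```python
# Two toy "models" with identical overall accuracy that still disagree
# on individual cases: the aggregate metric hides the disagreement.

# Each case: (feature_a, feature_b, true_label)
cases = [
    (1, 0, 1), (1, 1, 1), (0, 1, 0),
    (0, 0, 0), (1, 0, 0), (0, 1, 1),
]

def model_a(a, b):
    return a  # predict the label from feature_a alone

def model_b(a, b):
    return b  # predict the label from feature_b alone

def accuracy(model):
    return sum(model(a, b) == y for a, b, y in cases) / len(cases)

# Each model is right on 4 of the 6 cases, yet the two give different
# answers on 4 of the 6.
disagreements = sum(model_a(a, b) != model_b(a, b) for a, b, _ in cases)
```

A decision-maker shown only the shared accuracy number would have no idea that the two models point to different answers for most individual cases.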
How Do We Harness AI’s Benefits While Mitigating Its Risks?
Subrahmanian has worked on AI for national security applications, such as predicting cyberattacks and disinformation, since the 1980s. He agrees with Hullman that AI literacy is crucial and points to Finland’s strategy for countering misinformation as a possible template for anticipating AI’s dangers.
“Finland perceived a threat coming at them [from Russia] and proactively took steps before they were targeted,” he explained. “They adopted government policies that began training kids to question what they read on media and social platforms as early as elementary school.”
According to O’Sullivan, there is a lot of variation around the world in how AI technology is governed. He notes that while regulations frequently employ similar terminology, their underlying values and interpretations can differ substantially.
The European Union, for instance, has led the push for strict regulation, emphasizing “trustworthy AI” that must be lawful, ethical, and robust. China’s AI policies use seemingly comparable language but differ in practice on important matters like individual privacy.
Rahman points out that the United States has traditionally favored light regulation to promote innovation.
“You can see that with OpenAI,” Rahman stated. “When they created ChatGPT, they obviously were not that concerned about breaching copyright. They were prepared to ask for forgiveness later or take their chances.”
Although Rahman does not expect new federal AI regulations in the near future, he notes that existing laws, such as antidiscrimination legislation, can be adapted to address AI-related challenges. Some states, including Illinois, have begun passing AI-specific regulations, particularly in employment and facial recognition.
“The technology is outpacing our ability to control it,” Hullman stated, pointing to a regulatory misstep in the EU’s General Data Protection Regulation’s requirement that AI predictions be explainable.
“We are not at a place where we can necessarily offer an explanation for a model’s prediction and be sure it is truly right,” Hullman continued. “There is a significant risk of putting regulations in place that we cannot actually back up with the underlying methodology, because the models themselves are so complex.”
Subrahmanian believes that if legislators, legal professionals, engineers, and other stakeholders collaborate to create sound legislation, we can safely harness AI’s power in the future.
“There are a lot of people who are talking about potential abuses without really knowing much about it. However, there are not many individuals discussing solutions,” he noted. “To fix it, we need to bring in a multidisciplinary team of people.”
Despite their reservations, the experts are hopeful about the advances in knowledge that AI can bring.
“We are in an exciting time,” Voigt declared. “Imagine having an army of 10,000 research assistants who are capable of making sophisticated human judgments about certain facts. What questions would you be able to ask that you would not be able to ask otherwise?”
Jessica Hullman is an IPR fellow and the Ginni Rometty Professor of Computer Science. Hatim Rahman is an IPR associate, an associate professor of management and organizations and sociology (by courtesy), and the PepsiCo Chair in International Management. V.S. Subrahmanian is the Walter P. Murphy Professor of Computer Science and an IPR associate. Rob Voigt is an assistant professor of linguistics and an IPR fellow.

