
The AI Revolution: Real-World Use Cases, Risks, and What’s Next (Part 2)
In Part 2 of our interview with Jonas Bjerg, we shift from the individual to the institutional—exploring how generative AI is transforming entire industries, reshaping workforce skills, and raising complex ethical and regulatory challenges. From real-world applications in healthcare and marketing to the importance of human oversight and thoughtful policy, Jonas offers a grounded perspective on what it takes to adopt AI responsibly and where it’s all heading.
You can get up to speed here if you missed Part 1 of this article.
Supertrends: What are the biggest challenges companies face when implementing generative AI technologies?
Jonas Bjerg: When companies try to implement generative AI, they often run into challenges that go beyond just installing the software. One big issue is ensuring the quality and accuracy of AI-generated outputs. These models, as impressive as they are, can sometimes behave in unpredictable ways—they can “hallucinate,” meaning they might produce information that is completely incorrect or fabricated. If an organization is using AI to draft content or answer customer queries, they need checks in place so that errors or false statements don’t slip through. This is not trivial; it requires setting up review processes or using additional tools to fact-check AI work. The risk of inaccuracy is one of the most recognized concerns companies have with generative AI. No business wants to put out content or decisions based on flawed AI output, so handling the unpredictability is a key challenge.
Another major hurdle is related to data privacy and security. Generative AI systems often require large amounts of data, and sometimes that data is sensitive (like proprietary company documents or user information). Companies rightfully worry about feeding confidential data into third-party AI platforms. There’s also the flip side: AI might inadvertently expose pieces of its training data in responses, potentially leading to intellectual property issues.
Businesses have to navigate how to use these tools while protecting customer privacy and respecting IP laws. In practice, this might involve running models on-premises so data never leaves the company’s infrastructure, or anonymizing and masking data before it is sent to an external service. It’s a new domain of risk management.
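As a concrete illustration of the kind of safeguard Jonas describes, here is a minimal sketch of redacting obvious personal identifiers before any text is included in a prompt sent to an external AI service. The patterns and function names are illustrative assumptions, not a production-grade anonymizer; a real deployment would use a vetted PII-detection tool and policies agreed with legal and compliance teams.

```python
import re

# Illustrative patterns only (assumption): a real system should use a vetted
# PII-detection library rather than hand-rolled regular expressions.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholders before the text
    is sent to a third-party generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    raw = "Customer Anna (anna@example.com, +45 12 34 56 78) asked about her renewal terms."
    print(redact(raw))  # identifiers are masked; the business question remains intact
```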
Security concerns are another major consideration: AI can be used by bad actors too. For example, generative AI can create very convincing phishing emails or deepfake images. So, as companies adopt the tech, they must also brace for new security challenges and train their staff accordingly (imagine an employee receiving an AI-generated voicemail that mimics their CEO’s voice—will they be prepared to spot the fraud?). We have already seen bad actors use prompt engineering to trick a chatbot into agreeing to sell cars for far less than their real price. But of course, there are endless nefarious ways to misuse any technology, so companies simply need to be careful when they implement new tools.
Integration and change management issues also loom large. Implementing generative AI isn’t like flipping a switch; it often requires rethinking business processes. Employees might resist the new tools, either out of fear (worrying the AI will make their role redundant) or simply because of the learning curve. Companies face the task of training their workforce to use AI effectively and responsibly. That might mean reorganizing workflows to incorporate AI in a meaningful way, and defining new roles and responsibilities (e.g., who 'owns' the output of an AI system? Who validates it?). Additionally, there’s often the challenge of managing expectations—especially given the hype.
Executives might expect instant, massive ROI from generative AI because of all the buzz. The reality is that getting value from AI requires the right use cases and often some iteration. Setting realistic expectations upfront (AI is powerful but not magical) helps avoid disappointment down the road when the first pilot project hits snags.
Supertrends: Are there any specific examples where generative AI has been used in unexpected or groundbreaking ways?
Jonas Bjerg: The most recent example for me is how the researchers I mentioned earlier have started using generative models to design new drug molecules and treatments—things that humans might not have come up with alone. In one case, a generative AI system was used to develop a potential immunotherapy treatment for cancer.
Think about that: we often expect AI to write poems or code, but here it’s contributing ideas in drug discovery, suggesting novel chemical structures that could become life-saving medicines. Just the fact that this can shave several months off the path to testing is enough to get me excited about what else this use case can bring to humanity.
Another unexpected area is how generative AI has proven adept at tasks once thought to be exclusive to human experts. For instance, advanced language models have passed professional exams in fields like law and medicine, or outperformed humans in certain knowledge contests.
Not long ago, it would have sounded absurd that an AI could draft legal arguments or diagnose medical cases from exam questions. Yet here we are—AI models have scored respectably on the bar exam and medical license exams (under controlled conditions). While these are just tests, it’s groundbreaking in that it showcases how far AI’s language and reasoning abilities have come. It’s prompting universities and certifying bodies to rethink how to assess knowledge in an age where AI can be a participant.
People have also used generative AI to create entire short films and a synthetic voice of a deceased artist singing a new song. There was even a case of an AI-generated artwork winning a digital art competition at a state fair—which really shocked the art community and ignited debate about what constitutes art and authorship.
Supertrends: How should society address the ethical dilemmas posed by advanced generative AI systems?
Jonas Bjerg: The rise of advanced generative AI brings along some big ethical dilemmas, and addressing them will require a concerted effort from all of society. One major issue is the potential for misinformation and manipulation. AI can create very convincing fake images, videos, or news stories.
Society will need to get proactive about this—for instance, by developing better ways to authenticate content. This could involve technical solutions like digital watermarks that indicate something is AI-generated (which several tech companies are now working on), along with public education so people are savvy enough not to believe everything they see online.
I think “AI literacy” for the general public is crucial: teaching people basic skills to spot AI fakes and to be critical of the content they consume. In the same vein, social media platforms and news outlets have a responsibility to implement safeguards against the spread of AI-generated misinformation (perhaps similar to how they handle spam or bot activity).
Another ethical challenge is bias and fairness. Generative AI models learn from historical data, and unfortunately, that data can carry societal biases. If we’re not careful, AI systems could amplify stereotypes or unfair representations—whether it’s in images or in text. Society should demand and encourage that AI developers proactively address this.
That means more diverse training data, rigorous bias testing, and transparency about how models are built. We might also see the establishment of independent audit bodies that can evaluate AI systems for bias and other ethical issues. The goal is to ensure AI tools are aligned with human values and don’t harm disadvantaged groups by default.
In short, just because the technology can generate something doesn’t mean it should—we need norms and possibly regulations to define those boundaries.
Speaking of regulation, policies and guidelines are going to play a key role. We’re at a point where even AI creators are asking for guardrails. Clear ethical principles and possibly new laws will help set the boundaries. For example, data privacy laws might need updates, and intellectual property laws are scrambling to catch up to questions like “who owns AI-generated content?”
Governments and industry groups are starting to outline these—the EU’s AI Act is one prominent example, aiming to impose requirements on higher-risk AI systems. I believe society should take a balanced approach: protect people’s rights and safety, but also allow room for innovation.
That could involve things like requiring transparency (AI systems should disclose when you’re interacting with an AI versus a human), setting standards for AI in critical sectors (e.g., if AI is used in healthcare, it must meet certain validation criteria), and creating mechanisms for accountability (if an AI causes harm, how do we address that?).
Finally, an ethical dilemma we must address is the impact on employment and society at large. Advanced AI will disrupt jobs—that’s almost certain. Society needs to respond by investing in education and retraining programs so that people can pivot into new roles that AI might create or into roles that AI can’t do. Ethically, we should strive to share the benefits of AI widely, not just have a small group reap rewards while others face job losses.
That’s a societal choice: through policy, through how businesses implement AI (augmenting workers rather than just replacing them), and through social safety nets. Overall, tackling generative AI’s ethical dilemmas is a team sport. It involves tech developers building safer systems, governments setting rules, and educators and media raising awareness.
It’s a big challenge, but if we face it head-on with collaboration, I’m optimistic we can harness generative AI ethically and beneficially.
Supertrends: In your opinion, what role should policymakers play in regulating AI technologies?
Jonas Bjerg: Policymakers play a critical role in ensuring AI technologies develop in a way that’s beneficial and safe for society. In my view, they should act as both enablers of innovation and guardians of public interest.
On one hand, policymakers need to set clear guidelines and regulations that address the risks of AI. This might include creating standards for AI system testing (to ensure they’re safe and effective before deployment in, say, healthcare or transportation), requiring transparency about when AI is used (so consumers know if they’re talking to a bot or a human), and enforcing privacy protections (making sure companies don’t misuse personal data to train their AIs without consent).
The EU has over-regulated data in general to such an extent that there are almost no serious contenders left in Europe. Macron gave a great speech on this a few months back. The EU is no longer part of the conversation, which also means the EU doesn’t get to shape AI anymore. We are now at the mercy of the US and Asia. This should never have become the reality, and we need to intervene ASAP if we want any part in defining how AI is used and developed globally. The EU has a long history of over-regulating, and I’m sad to see the trend continuing.
Policymakers must be careful not to stifle innovation. AI is moving fast, and overly heavy-handed or misinformed regulation will slow beneficial developments or push research to less regulated regions. So, they should engage closely with technologists, ethicists, and industry when crafting rules—staying up-to-date on the tech’s capabilities and limitations.
A good approach I see is risk-based regulation: focus on the uses of AI that are high-stakes (like mortgage approvals or self-driving cars) and impose stricter standards there, while keeping a lighter touch on low-risk applications (like AI that generates funny memes). This is the approach the EU AI Act claims to take, classifying AI systems by risk category, but in my opinion the damage from the broader data regulation has already been done.
I don’t see the EU returning as a meaningful player in AI anytime soon. Policymakers also have a role in fostering the positive potential of AI. That includes funding education and retraining programs so the workforce can upgrade their skills for an AI-driven economy. It means encouraging research in areas like AI safety, interpretability, and energy efficiency (since training big AI models can consume a lot of power)—perhaps via grants or public-private partnerships.
I also think international coordination is key: AI is a global phenomenon, so countries should communicate and perhaps establish some common frameworks (just as there are international agreements for things like nuclear technology, we might see something similar for AI in the long run to prevent misuse and ensure fairness globally).
In summary, policymakers should act as the referees in this fast-paced AI game—setting the rules that keep it fair and safe, calling out fouls when needed—but also as coaches: encouraging progress and helping society adapt to the new strategies this technology enables. Striking that balance is no easy task, but it’s absolutely essential as AI becomes more ingrained in every facet of life.
Supertrends: What is your vision for the role of generative AI in five to ten years?
Jonas Bjerg: Peering five to ten years into the future, I envision generative AI becoming an everyday co-pilot in our lives and work. Just as the past decade saw smartphones and the internet become ubiquitous, the next will likely normalize AI as an ever-present assistant. In my book, I call this the final platform shift.
We’ll have more sophisticated versions of today’s AI chatbots integrated into virtually every device—from our productivity software at work to our appliances at home. These systems will be more multimodal, meaning they won’t just handle text; they’ll process images, video, voice, and more seamlessly in one place. You might have an AI that can draft a report, create accompanying slides with graphics, and even generate a short video summary of the findings, all in one workflow.
In terms of communication, interacting with AI will feel more natural—closer to talking to a knowledgeable colleague than using a tool. I expect significant improvements in an AI’s ability to maintain context over long conversations, remember user preferences, and tailor its style or approach to whoever it’s assisting.
Generative AI will also likely spur new industries and economic growth in a big way. Analysts already estimate that generative AI could contribute on the order of trillions of dollars to the global economy in the coming years. In 5-10 years, we’ll see that materialize through a boom in AI-augmented services and products.
For instance, entertainment might have personalized AI-generated content—imagine custom-tailored video games or interactive stories generated on the fly based on your interests. E-commerce might use AI to let you “try on” clothes in a virtual fitting room with photorealistic avatars. Education could be transformed by AI tutors that adapt dynamically to each student’s learning style and pace. Importantly, many of these advances could make services more accessible and affordable (because AI automation can lower costs), which is a positive economic force if managed well.
In the workplace, generative AI might fundamentally reshape job roles—hopefully in a way that frees people from drudgery and allows more focus on creative and meaningful work. I like to think that in a decade, saying you have an “AI assistant” at work will be as common as saying you have an email account today.
Everyone will have some form of AI helping them — whether you're a doctor using AI to summarize patient notes and medical literature before seeing a patient, or a construction project manager relying on AI to automatically monitor progress via drone footage and flag delays.
The nature of collaboration could change: you’ll have human teams plus AI team members (like a virtual data analyst sitting in on a meeting, ready to pull up insights when asked).
We should also expect the quality and reliability of generative AI to improve by then. The issues of today—like frequent inaccuracies or bizarre outputs—will be mitigated by better training techniques, more refined models, and perhaps hybrid systems that combine rules or factual databases with generative flexibility.
We’ll likely solve a lot of the current limitations, making AI outputs more trustworthy and easier to control or explain. That’s not to say everything will be rosy—new challenges will emerge (they always do). But I foresee a future where generative AI is just woven into the fabric of how we do things. It’s almost invisible in its ubiquity, yet incredibly powerful—kind of like electricity. And just as electricity revolutionized industries a century ago, generative AI will be a general-purpose capability that touches everything from how we create art and culture to how we invent, work, and play.
The key will be continuing to shape its use in line with our values, so that in five or ten years we look around and see AI as a well-managed, beneficial companion in our daily lives.
Waymo and other car companies already have fully autonomous vehicles driving around in places like San Francisco and Austin, Texas. I have driven in them; they are perfect, and you can order them straight to your location in minutes, just like Uber. Yet it will take years before we see them in Europe (hence the regulation rant earlier). In 5-10 years, I do believe we will see them on the roads in Europe too, though.
Supertrends: What key skills should professionals develop to thrive in an AI-driven world?
Jonas Bjerg: To thrive in an AI-driven world, professionals (especially those early in their careers) should focus on developing a combination of technical literacy and timeless human skills.
On the technical side, you don’t necessarily need to be a programmer or data scientist, but you should become AI literate. That means understanding at a high level how AI works, its strengths and weaknesses, and knowing how to effectively use AI tools relevant to your field.
For example, if you’re a marketer, learn how to prompt AI to draft social media posts and also how to fine-tune the tone. If you’re a teacher, learn how AI can help generate lesson plans or personalized quizzes. In essence, become comfortable collaborating with AI as a tool. This skill might be called “AI proficiency”—similar to how being proficient with the internet or Office software became essential; now it’s about being adept at integrating AI into your workflow.
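As a small, hedged example of that kind of AI proficiency, the sketch below drafts a social post and then requests a tone variant. It assumes the OpenAI Python SDK and an illustrative model name; any chat-style API with a similar interface would do, and the prompts themselves are the part worth adapting to your own field.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

def draft_post(topic: str, tone: str) -> str:
    """Ask the model for a short social media post about a topic, in a given tone."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name (assumption)
        messages=[
            {"role": "system", "content": "You are a marketing copywriter."},
            {"role": "user", "content": f"Draft a LinkedIn post about {topic}. Tone: {tone}. Max 80 words."},
        ],
    )
    return response.choices[0].message.content

first_draft = draft_post("our new sustainability report", tone="professional and factual")
playful_take = draft_post("our new sustainability report", tone="playful, with one light joke")
print(first_draft, playful_take, sep="\n---\n")
```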
Equally (if not more) important are the uniquely human skills that AI can’t easily replicate. These include critical thinking, creativity, emotional intelligence, collaboration, and adaptability. One education expert referred to these as “Power Skills” for the AI era.
The idea is that as AI handles more routine tasks, skills like creativity (coming up with original ideas and approaches), critical thinking (making judgments, solving novel problems), and collaboration (working effectively in teams, communicating, leading) become the real differentiators. They actually make all your other skills more effective.
For instance, an AI might give you five possible solutions to a problem—you need the critical judgment to pick the best one and the creativity to perhaps tweak it into a truly innovative solution. Emotional intelligence and empathy are also key; roles that involve understanding people (be it clients, students, patients, colleagues) will always value the human touch. If anything, AI’s presence makes human empathy more precious, because no matter how advanced AI gets, people will crave genuine human connection and understanding.
Another crucial skill is the ability to continually learn. The landscape of AI tools and capabilities is changing fast. What gives you an edge today might be commonplace in a couple of years, and new tools will emerge. Adopting a mindset of lifelong learning—being curious and willing to update your skills regularly—will serve you well. This could mean anything from taking formal courses to simply being self-driven enough to play around with new tech and teach yourself. Adaptability is a big part of this; those who can quickly pick up new systems or adjust to new workflows will navigate the AI-driven workplace more smoothly.
Also, I encourage developing your understanding of the data side of things. AI runs on data, so having comfort with data analysis and interpretation helps. You don’t need to become a statistician, but know the basics: how to read a chart, how to spot something off in data, and how data informs AI outputs.
This goes hand in hand with critical thinking—don’t take every AI output at face value; look at the evidence behind it if available.
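To make that data-literacy point concrete, here is a minimal sketch of the kind of sanity checks one might run before trusting a dataset, or an AI-generated summary of it. The file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical sales data; the point is the habit of checking, not the specifics.
df = pd.read_csv("monthly_sales.csv")

print(df.describe())        # ranges and means: do the magnitudes look plausible?
print(df.isna().sum())      # missing values that could skew any AI-generated summary
print(df.duplicated().sum())  # duplicate rows that would inflate totals

# Flag obvious outliers, e.g. revenue values far outside the typical range.
q1, q3 = df["revenue"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["revenue"] < q1 - 1.5 * iqr) | (df["revenue"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers to review before drawing conclusions")
```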
Supertrends: What new career opportunities do you see emerging as a result of advancements in generative AI?
Jonas Bjerg: Advancements in generative AI aren’t just changing existing jobs—they’re also creating entirely new career paths. One of the most prominent emerging roles is that of the AI Ethicist or AI Ethics Officer. As organizations deploy AI, they want someone who can foresee and navigate the ethical pitfalls—bias, fairness, compliance with regulations, etc. AI ethicists help develop guidelines for responsible AI use and often work cross-functionally to review AI projects from an ethical standpoint.
Similarly, roles like AI Policy Specialist or AI Lawyer are popping up, as companies need to interpret new AI laws and ensure they’re in compliance, or draft policies for AI use internally.
We’re also seeing roles focused on the care and feeding of AI systems. For example, Model Trainers or AI Trainers—individuals who curate training data and fine-tune AI models for specific tasks. There’s also the Model Ops/MLOps Engineer, which is akin to DevOps but for machine learning models: making sure AI models are deployed reliably, monitored, and updated as needed.
A Model Validator is another emerging title, where one’s job is to rigorously test AI models, poke holes in them, and ensure they meet quality criteria before they’re rolled out.
One area I find intriguing is the rise of roles that combine domain expertise with AI. For example, Generative AI Specialist in Design—someone who knows design deeply and also knows how to use generative tools to create prototypes or artworks. Or AI in Healthcare Lead—perhaps a clinician who also guides the integration of AI into a hospital’s processes. These aren’t entirely new jobs created out of thin air, but hybrid roles where AI knowledge is a core part of the job description now.
We can’t forget AI Maintenance and Oversight roles. Think of titles like AI Quality Analyst or AI Auditor – people who regularly check AI systems for errors, biases, or performance drift. There’s talk that just as financial auditors became crucial after financial tech got complex, AI auditors might become standard as AI gets embedded in critical operations. And given the concerns about things like deepfakes, I wouldn’t be surprised to see roles like Content Authenticity Analyst who focuses on detecting AI-generated fake content.
In essence, as generative AI integrates into various sectors, new specialties are emerging at that intersection of AI and domain knowledge. Many of these roles center around developing, managing, or improving AI systems, and ensuring they’re used responsibly. For early-career folks, this is good news – it means there are fresh, exciting paths to explore that didn’t exist a few years ago.
Whether you’re inclined towards the technical side (like becoming an AI model expert) or the strategic side (like shaping a company’s AI ethics or strategy), the landscape of AI careers is expanding. The key is to remain adaptable and maybe even help invent your own role, because the “AI + X” combo (where X is your domain) is going to yield some very interesting job descriptions in the near future.