Darko Matovski, CEO and co-founder of CausaLens, thinks regulation is necessary.

From masters of the digital universe to pariah figures peddling a machine-heavy dystopia. Maybe that's not exactly the journey AI developers have taken, but the debate about the benefits and risks of AI tools has intensified over the past few months, fueled in part by the arrival of ChatGPT on our desktops. Against this backdrop, the UK government has published plans to regulate the sector. So what does this mean for startups?

In presenting proposals for a regulatory framework, the government promised a light-touch, innovation-friendly approach, while also addressing public concerns. And startups in the industry were probably relieved to hear the government talking about opportunities rather than emphasizing risks.

Presenting the published proposals, Michelle Donelan, Minister for Science, Innovation and Technology, said: "AI is already delivering tremendous social and economic benefits for real people, from improving NHS medical care to making transport safer. Recent advances in areas such as generative artificial intelligence give us a glimpse of the enormous opportunities that lie ahead."

So, recognizing the need to support the UK's AI startups, which collectively attracted more than $4.65 billion in VC investment last year, the government avoided doing anything too radical. There will be no new regulator. Instead, existing watchdogs such as communications regulator Ofcom and the Competition and Markets Authority will share the burden. And oversight will be based on broad principles of safety, transparency, accountability and governance, and access to redress, rather than being overly prescriptive.

The Smorgasbord of Artificial Intelligence Risks

The government has, however, identified a smorgasbord of potential downsides. These include risks to human rights, fairness, public safety, social cohesion, privacy and security.
For example, generative AI can threaten jobs, create problems for educators, or produce content that blurs the line between fiction and reality. Decision-making AI, widely used by banks to evaluate loan applications, has already been criticized for producing results that simply reflect existing industry biases, thereby lending a kind of validation to injustice. Then, of course, there is the artificial intelligence behind self-driving cars and autonomous weapons systems: the kind of software that makes life-or-death decisions.

That is a lot for regulators to get to grips with. If they get it wrong, they risk either hindering innovation or failing to properly address real problems. So what will regulation mean for startups working in this industry? Last week, I spoke with Darko Matovski, CEO and co-founder of CausaLens, a provider of AI-driven decision-making tools.

Need for Regulation

"Regulation is a must," he says. "Any system that could affect people's livelihoods must be regulated." However, he acknowledges that this will not be easy, given the complexity of the software on offer and the diversity of technologies in the industry.

Matovski's own company, CausaLens, provides AI solutions to aid decision making. The startup, which raised $45 million from VCs last year, has so far sold its products into markets such as financial services, manufacturing and healthcare. Use cases include price optimization, supply chain optimization, risk management in financial services, and market modeling.

On the face of it, decision-making software shouldn't be controversial. Data is collected, processed and analyzed to enable companies to make better, automated choices. But of course it is controversial, because of the inherent danger of bias when software is "trained" to make those choices. The challenge, according to Matovski, is to create software that removes bias. "We wanted to create artificial intelligence that people can trust," he says.
To that end, the company's approach has been to build a solution that continuously monitors cause and effect. This allows the software to adapt to how an environment, for example a complex supply chain, reacts to events or changes, and to factor that into the decision-making process. The idea is that decisions are made in real time, based on what is actually happening.

The bigger point is that perhaps startups should think about addressing the risks associated with their particular flavor of AI.

Keeping Pace

But here is the question. With dozens or perhaps hundreds of AI startups developing solutions, how can regulators keep up with the pace of technological development without hindering innovation? After all, regulating social media has been hard enough.

Matovski says tech companies need to think in terms of managing risks and operating transparently. "We want to be ahead of the regulator," he says. "And we want to have a model that can be explained to regulators."

The government, for its part, aims to foster dialogue and collaboration between regulators, civil society, and AI startups and scale-ups. At least, that is what the White Paper says.

Room in the Market

Part of the UK government's intention in framing its regulatory plans is to complement its existing AI strategy. Key to that is providing a fertile environment in which innovators can enter the marketplace and grow. Which raises the question of how much room there is in the market for young companies. Recent publicity around generative AI has focused on Google's Bard software and Microsoft's relationship with ChatGPT creator OpenAI. Is this a market only for big tech players with deep pockets?

Matovski doesn't think so. "AI is pretty big," he says. "There's enough for everyone." Pointing to his own corner of the market, he argues that "causal" AI technology has not yet been fully exploited by the larger players, leaving room for new businesses to take market share.
Perhaps the challenge for anyone working in this market is to build trust and to address the real concerns of citizens and their governments.