Why AI Regulations Bother Me?

A Debate Summary and a Guide to AI Superpowers, Laws, and Opportunities.

I was a guest speaker at the National Diplomacy Week at Jagiellonian University last month.

Turns out the diplomatic community is really engaged in the topic of AI regulations, development and ethics.

We discussed topics including public diplomacy, cybersecurity, and AI in business development. It was cool to share my tech thoughts with some international politics enthusiasts.

We also tried to cover both the opportunities and the challenges AI solutions may pose in the future. I prepared a small summary of the main issues we talked through. I’ve learned a lot about the diplomatic world, so dive into it with me.

What is AI? Why the hype?

Why is there so much talk about AI now? The big moment happened in fall 2022 when ChatGPT came out. People started noticing AI more, but the tech had been around for a few years. What was missing was the right cloud infrastructure and servers, plus a good commercial product to sell.

Artificial Intelligence (AI) is commonly categorized into two types: narrow AI and general AI. Narrow AI, prevalent in today's market, excels in specific tasks such as language translation or image recognition but lacks the ability to operate beyond its programmed scope. On the other hand, general AI (or Artificial General Intelligence, AGI), a theoretical concept yet to be achieved, aspires to mimic human intelligence, demonstrating adaptability and learning across a wide array of domains. The development of AGI holds significant societal, ethical, and regulatory implications, prompting global discussions on how to manage its potential impact, including issues of privacy, safety, and ethical considerations. That said, until someone actually develops an AGI system, the topic is purely academic.

Currently in the AI field, two prominent types of neural networks are transformers and diffusion models (often called "diffusers"). Transformers excel in processing and understanding text, significantly enhancing how computers work with written information. Diffusion models, meanwhile, generate new content, like images or music, by starting from random noise and iteratively refining it into a coherent output. It is important to note that these technologies alone are unlikely to lead to the creation of AGI. Achieving AGI will likely require advancements beyond these current architectures.
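To make the distinction concrete, here is a minimal sketch using the open-source Hugging Face transformers and diffusers libraries. This is just one possible toolset, the model names are illustrative examples, and both downloads are sizeable:

    # A transformer continues text; a diffusion model refines random noise into an image.
    # Illustrative only: the model names are examples and the diffusion part needs PyTorch installed.
    from transformers import pipeline
    from diffusers import DiffusionPipeline

    # Transformer: predict the next tokens that follow a prompt.
    generate = pipeline("text-generation", model="gpt2")
    print(generate("AI regulation in Europe is", max_new_tokens=20)[0]["generated_text"])

    # Diffusion model: start from noise and iteratively denoise it into a picture.
    pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
    image = pipe("a diplomat debating AI policy, oil painting").images[0]
    image.save("debate.png")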

What life areas does AI revolutionise?

I think that AI-related technologies are already making a breakthrough across industries, similar to how electricity transformed every aspect of our lives. AI has the potential to make significant changes in our personal lives, businesses, healthcare, and even military operations.

However, the scale and pace of AI development raise concerns about potential harm if it is not properly regulated. AI is already revolutionizing tasks that are meticulous and repetitive, performing them better than humans. Instead of being a threat, I think it can be seen as useful in improving efficiency and productivity.

A major change in our civilization is coming up, and we can't accurately predict the extent of its impact on our lives. As AI continues to advance, we are entering an era of transformation and I’m curious to see what it will bring.

Some say that current text AI output is mediocre at best. I think that unremarkable skill applied at scale and at a whim can change the world.

Regulatory models and AI superpowers

Even though AI tools are already having a positive impact on people's lives, they have been misused for, among other things, misinformation and spam. There have been privacy concerns as well. This prompted certain regulators to take action and start proposing targeted laws. What is more, existing data protection laws still apply and influence how AI models can be built and used.

If we want to categorize how AI development is progressing and what attitudes towards AI regulation are emerging, we can point to three models: the American, the Chinese, and the EU one.

American

Currently, the USA stands as the leader in the AI market. The American perspective focuses on the prosperity of the tech industry through free-market policies for startups and businesses. There is also a preference for minimal government influence.

However, with the introduction of ChatGPT, there has been a shift in attitude. Now the USA is trying to avoid consolidating too much power in a few big AI companies.

Despite those concerns, there is still a techno-optimistic attitude, assuming that companies will self-regulate (which might be a bit far-fetched). A new AI Bill of Rights, including guidance on the usage of AI, has just been unveiled; it is seen as the first step toward a proper AI regulatory system.

Chinese

China's next-generation AI plan aims to position the country as a global AI leader, surpassing the USA. The primary goal is development. At first, regulations were absent, allowing companies the freedom to pursue their initiatives, with funding and access provided.

The concerns started with the rapid development of generative AI. The government became worried that AI might generate content outside the bounds of censorship.

There are now regulatory experiments that try to keep supporting business development while making sure the advancements align with the vision of the Communist Party.

EU

In the EU, we are now finalizing the AI Act, which embodies a human-centric vision of AI with a focus on citizens' rights and ethics. A key dilemma is how to strike a balance: how to keep startups innovative and ensure technological independence, while also making sure citizens and their rights are protected.

Apart from the models mentioned above, there are of course other countries and organizations that try to create their own independent laws.

Let’s take Israel - the Director General of its Ministry of Defense announced plans to make Israel an AI superpower with a focus on military applications. They allocated a record-high budget for AI development, with a focus on laws and research serving defense needs.

Or, more recently, France, Germany, and Italy agreeing on a unified regulatory framework for AI companies in Europe. They don’t want sanctions on businesses to be immediate; instead, they put the focus on incentives first, with penalties for serious breaches coming later, if necessary.

Not long ago we also saw Italy block ChatGPT over concerns about the lack of proper processes for handling personal data and the absence of age restrictions.

There’s a multitude of solutions being discussed and implemented. Some try to regulate things that don’t exist yet (AGI). Some suggest putting restrictions that seem to benefit bigger players by raising the cost of operating AI systems.

I’m watching the space with moderate interest, as it can potentially affect me. That said, even if regulation harms innovation, it’s still more interesting to spend time on the actual tech than on the accompanying initiatives.

What regulatory issues did I face when running my own AI startup?

Machine learning model development depends on training data. When companies talk business, a lot of the time is spent discussing how to keep data safe and hand it over lawfully.

From what I've seen, China is tough when it comes to data safety. Everything has to be approved by the authorities, and the rules are strict. Because of this, many companies don't want to do business there (political reasons play a part as well).

The EU aims to keep its users' data within its own borders. Regulations heavily restrict who can process EU citizens' data and where it can be transferred. To put it in perspective, this is not so different from what China enforces.

Banks and financial institutions have the strictest rules. In many sales conversations I had for my AI projects, things came to a halt when companies said they couldn't make data anonymous or didn't have good enough policies to share user data safely.
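For illustration, a big part of "making data anonymous" is simply masking obvious identifiers before anything leaves the company. Here is a hedged, minimal sketch of that idea in Python; the patterns are simplified examples and nowhere near exhaustive (real anonymization also needs name detection and a review process):

    import re

    # Very rough, illustrative patterns - real anonymization requires far more care.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

    def mask_pii(text: str) -> str:
        """Replace obvious personal identifiers before data is shared externally."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    print(mask_pii("Reach Jan at jan.kowalski@example.com or +48 600 123 456"))
    # -> Reach Jan at [EMAIL] or [PHONE]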

But this situation also creates a niche for products that work on premise. We can have an AI product that runs in the client’s data center, and that opens up a lot of possibilities. This is the route I chose with Sentimatic.
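To sketch the on-premise idea (this is only an illustration of the general approach, not a description of how Sentimatic actually works, and the model name is an example), an open model can be downloaded once and then run entirely inside the client's network:

    # Minimal sketch: an open sentiment model running on hardware the client controls,
    # so customer data never leaves their data center.
    # Illustrative only - the model choice and serving setup are assumptions.
    from transformers import pipeline

    sentiment = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",  # example open model
    )

    reviews = ["The onboarding was painless.", "Support never answered my ticket."]
    for review in reviews:
        result = sentiment(review)[0]
        print(review, "->", result["label"], round(result["score"], 3))

Because inference happens on infrastructure the client already controls, most of the data-sharing and anonymization discussions above become much shorter.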

Poland in AI development

Let me bring in my local perspective and a mix of opinions I heard during the debate.

When it comes to EU laws, Poland plays a role in the establishment process, but mostly because we're a member state, so we're sort of ‘forced’ to do so. For example, our involvement in the recent EU AI Act work was a bit disappointing: only 12 people from our country participated, half from private companies and the rest directly connected with the Ministry of Digitization.

Polish citizens and companies aren't showing a strong interest in AI regulations. Our current attitude is that we're more on the receiving end than actively contributing. Additionally, Poland does not have enough strong AI startups. We are in danger of falling behind other European countries.

On a brighter note, there are some AI startups, like Eleven Labs, which are doing well. We do a lot of AI research, but turning it into something commercial is a challenge. There's money available for AI projects in places like PARP (a government agency), but the paperwork required is massive, so you have to consider whether you're willing to take on a lot of bureaucratic risk.

I think that even though so far we (Polish engineers) have mostly worked for Western European and US companies, we've learned a lot. With a bit of optimism, we can use that experience and money to figure out how to build our own AI scene. So I'm certainly curious about what's coming. And happy to contribute.

What should you do to gain the most from AI development?

With the rapid development of AI, it is easy to miss out on upcoming opportunities. Here is what I think is worth keeping in mind to get the most out of this technological breakthrough, in the context of this debate:

Securing Funding:

  • Explore private and government initiatives to fund your AI projects.

  • Be mindful of the consequences of getting funding. As a specialist, you can bootstrap a small product as well.

Being Aware:

  • Stay informed about new AI solutions, but avoid getting swept up in the hype.

  • Keep an eye on government regulations that could impact your rights and business.

Looking for Job Opportunities:

  • Look for emerging roles in the AI industry and find your niche.

  • Use AI tools to streamline tasks, as companies are still trying to integrate AI.

Thinking of Privacy:

  • Protect your data by applying privacy settings in AI chat applications. Consider paying for a plan that keeps your data from being used for training.

Balancing Personalization and Automation:

  • Use AI for content creation but not in a direct way. Be inspired. Get criticism. Always add value.

  • Always check the quality and relevance of AI-generated content.

  • Approach AI tools with caution; they make mistakes with striking confidence.

Conclusions

AI development is unavoidable. It is better to prepare ourselves for it than to avoid the topic or scare society with the possible dangers of new technologies. This technological revolution may be different from previous ones, but fear mongers have been among us since the beginning of humanity. Sure, regulations are needed and we need to protect societies from AI-driven collapse, but decisions should be evidence-based, not fear-driven.

From my personal perspective I hope to see Poland becoming more active in establishing AI laws and I'm closely watching any new regulations coming out which might affect me personally and professionally.

I think the best thing we can do right now is educate ourselves and try to get the most out of the whole thing. It would be silly to let opportunities pass us by and not use the full potential (generative) AI has to offer.

Your thoughts? More worried or excited? Regulation: BS or necessity?
