The world is changing quickly, and so is security!
Advances in technology allow people to interact and connect with anyone, anywhere, at any time. Globalization is accelerating to the point that countries, people, and cultures are more connected than ever.
Among these technological advances, artificial intelligence is transforming things dramatically. The World Intellectual Property Organization reports that the number of registered AI patents tripled from 2013 to 2017. According to Bloomberg Law's Global Patent Database, the number of AI-related patents rose from 3,267 in 2017 to 18,753 in 2021, nearly a sixfold increase!
This surge in patents has produced an explosion of AI-based products, apps, and technologies, which are ultimately transforming our daily lives and human-machine interaction. The most surprising thing is that this is not even a new trend! AI has been around for a long time; it is the pace of its evolution that is new.
Too much power in one tool?
Forbes estimates that AI contributed $2 trillion to global GDP in 2018, making it the largest commercial prospect in today's rapidly changing economy. That contribution is estimated to reach $15.7 trillion by 2030.
ChatGPT, a very recent AI tool built on a machine-learned natural language processing (NLP) model, passed 1 million users just 5 days after its launch. By January 2023 it already had more than 100 million users, and it recorded 1.6 billion visits in March 2023. This growth raised widespread worries about security, privacy, intellectual property, and legal threats.
AI is already enhancing people's lives. However, some remain concerned about the hazards of unexpected behavior. This is why the idea of "alignment" is critical to ensuring that AI stays consistent with human values, ethics, and security. When OpenAI examined GPT-4 for harmful applications, it was discovered that the model could propose illegal and dangerous solutions. Following this finding, many adjustments were made before the tool's public launch. However, it is still difficult to identify all potential abuses: systems based on machine learning continue to learn.
Jack Clark, co-founder of Anthropic, warns that each new AI system introduces unexpected capabilities and safety risks that are increasingly difficult to forecast. The fundamental procedures used to construct these systems are well established, but other organizations, nations, research institutes, and rogue actors may not take adequate precautions.
Every individual and legal entity is trying to get value out of AI, whether to improve life processes or simply to stay competitive in business. As evidence, Statista reports that investment in artificial intelligence reached $93.5 billion in 2021, more than 8 times the 2015 figure. As mentioned earlier, AI's journey is well under way; only its most dramatic changes catch our attention once in a while.
The power of artificial intelligence has been widely demonstrated in a commercial context. AI systems can deliver value well beyond what was initially expected. Massive computing capabilities cut data processing time, letting us uncover new answers on subjects we had not yet thought to question!
Adapting security to a changing world
The expansion of AI tempts more businesses to embrace it, jump on the AI superhighway, and start employing products broadly without guaranteeing the accuracy of outcomes. Even if individuals and organizations use these technologies at their own risk, they are still obligated to keep up with such developments and must make an effort to balance their adoption against security, privacy, intellectual property, and legal concerns.
Notably, security is one of the fields on the front line of AI's impact, and with it data security is becoming a heavy responsibility. But who exactly will be in charge of securing that data? Such systems are giant transformers that generate more data than ever, leading to greater environmental complexity, which in turn increases risk.
It is evident that risks multiply as something becomes more complex. This may create opacity for security defenders but, paraphrasing Charlie Bell, executive VP of security at Microsoft: "AI will empower defenders to see, categorize, and interpret far more data, much more quickly than ever, in ways that may not have been feasible even for large teams of security experts."
According to an (ISC)² study, an additional 3.5 million cybersecurity workers are now needed to secure assets. This is good news for cybersecurity specialists! AI developments can thus offer employment opportunities, or even better cybersecurity management.
The limits of cybersecurity
Pamela Gupta, CEO of Trusted AI, an OutSecure Inc. company, believes that cybersecurity and privacy are not enough! "What we need is trust and a defined framework built on a risk-based approach."
As artificial intelligence takes bigger decisions for industries, and those decisions come to make the rules, building trust and transparency in AI becomes crucial for businesses to get the right value out of it. Trust relies on security and privacy, but it is important to note that risk analysis is also at the core of cybersecurity.
To carry out a risk assessment, we first need to understand the business's operating environment, then build a threat model to determine what the risks are, where they come from, and what is required to handle them.
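The threat-modeling step above can be sketched in a few lines of code. This is a minimal illustration only; the assets, threats, and likelihood/impact scores below are hypothetical, and real risk assessments use richer methodologies than a simple likelihood-times-impact product.

```python
# Toy threat model (all entries hypothetical): each identified threat gets
# a likelihood and an impact score, and risk = likelihood x impact.
threat_model = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("customer data", "data leak via third-party AI tool", 3, 5),
    ("ML training set", "data poisoning", 2, 4),
    ("billing system", "phishing-based account takeover", 4, 3),
]

# Rank threats by risk score so the highest risks are handled first.
ranked = sorted(
    ((asset, threat, likelihood * impact)
     for asset, threat, likelihood, impact in threat_model),
    key=lambda row: row[2],
    reverse=True,
)

for asset, threat, score in ranked:
    print(f"{score:>2}  {asset}: {threat}")
```

Even this toy version makes the logic of the text concrete: you cannot rank a risk you have not first located in the business's operating environment.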
Pamela Gupta, the cybersecurity strategist, describes the fundamental pillars of trustworthy AI as security, privacy, ethics, and transparency. To govern the foundation, she adds further pillars: explainability, regulations, audit and auditability, and accountability.
The Human Bias
We often talk about data security when it comes to cybersecurity in AI. Yet cybersecurity goes beyond data poisoning alone.
To apply the pillar-based approach, it is essential to understand how these AI systems function and how they are built. These systems learn from the training sets and data provided to them, and that is where bias comes in: you have to feed them correct and accurate data. If you feed them biased data, that is one component that can go wrong. Moreover, we all know there is erroneous data publicly available that can be scraped and used for training without careful evaluation.
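A tiny sketch can show how directly a model inherits bias from its training data. The "model" below is deliberately naive (it just memorizes the majority label per group), and the data and group names are entirely hypothetical, but the mechanism is the same one the text describes: biased inputs, biased outputs.

```python
# Toy illustration (hypothetical data): a naive model that learns the
# majority label per group from its training set. If the training data
# is biased against one group, the model reproduces that bias.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (group, label) pairs, e.g. ("A", "approve")."""
    labels_by_group = defaultdict(Counter)
    for group, label in examples:
        labels_by_group[group][label] += 1
    # The "model" is simply the most common label seen for each group.
    return {g: c.most_common(1)[0][0] for g, c in labels_by_group.items()}

# Biased training set: group "B" is mostly labeled "reject",
# regardless of any real underlying merit.
training_data = (
    [("A", "approve")] * 90 + [("A", "reject")] * 10 +
    [("B", "approve")] * 20 + [("B", "reject")] * 80
)

model = train(training_data)
print(model)  # the learned bias: {'A': 'approve', 'B': 'reject'}
```

Real machine-learning systems are vastly more sophisticated, but the lesson scales: no amount of model quality compensates for skewed or erroneous training data.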
It should also be noted that even training sets produced by algorithms can carry bias of their own. And as the pre-release evaluation of ChatGPT mentioned earlier showed, it is always difficult to discover all potential misuses of these machine-learned AI systems, which are themselves only a subset of AI.
A new era has begun
The age of AI, security, and data is broad enough to examine indefinitely, yet one reality remains: we are all profoundly involved in this fast-changing period, whether we as individuals, businesses, and communities are thrilled about it or not. The credit management industry is no exception in facing these immense developments.
At My DSO Manager, while we always aim to stay abreast of new technology developments, we have always placed implementation precautions at the heart of our evolutions, whether related to AI automation, data collection for trade receivables analysis, or simply optimizing algorithms. We believe our long-standing risk management expertise gives us leverage in this fast-changing period to apply such business risk assessment frameworks as we begin to use AI-based solutions.