Readers of my previous blog, which sought to debunk some of the myths around robotic process automation and artificial intelligence, were hopefully convinced to not be fearful of an impending jobs apocalypse.
However, I hope they were not led to believe that a move to RPA is all smooth sailing and happy faces, because the truth is there are numerous processes and rules to consider before your business takes the plunge.
Not least of these are the potentially significant legal implications that could arise from inviting robots to take charge of the inner workings of a business.
Even though it is a fledgling area for many organisations, we have already encountered numerous legal conundrums as clients work through the changes in the way they engage within their existing ecosystem.
Some of these legal implications are specific to an industry vertical or local regulatory settings, but I hope it is helpful to lay some of them out so others can ponder what it could mean for them.
I’m hoping this article will open eyes a little wider within organisations looking to proactively prepare and make provisions for their journey into robotic process automation and AI.
Exploring the Importance of Regulatory Considerations in the Implementation of RPA and AI
Organisations need clear regulations and guidelines when deploying RPA and artificial intelligence to ensure responsible and ethical use. They should evaluate algorithms for potential bias, inform customers about the scope of these technologies' usage, consider workforce displacement concerns, ensure compliance with data privacy and security obligations, and establish a clear communication policy for customer interactions. The EU General Data Protection Regulation (GDPR) and the US Federal Trade Commission Act are primary examples of the regulatory frameworks that apply in this domain. Companies must be transparent about how decisions are made when integrating RPA and AI into their business processes, to prevent harm or unjust outcomes, and they must establish accountability measures to address any repercussions of deploying these tools.
Ensuring that the use of RPA and artificial intelligence is ethical, responsible, and compliant with applicable legal standards is essential. To help meet these requirements, organisation-specific frameworks and industry-wide standards such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems have been developed. Organisations should consult legal and ethical experts to maximise the benefits of these technologies while minimising potential risks and negative impacts.
What types of legal implications exist within RPA and AI strategy?
While leaders in the manufacturing sector have already worked through the impacts of automation on their organisations and their partner relationships, the same cannot be said for those in the services industry.
When a large proportion of core processes and activities is automated by software, the software itself becomes the differentiator between rivals in any given industry.
Therefore, the question of who owns the soul and who owns the body of the software code or algorithm becomes a fight worth having, and one that needs to be resolved before it becomes business critical.
There are enough ambiguities to keep the lawyers busy, including:
- Who owns the software code or algorithm?
- In which jurisdiction is the service being offered?
- Who must take responsibility for maintaining the code and updating algorithms?
- What security features must be inbuilt to prevent fraud?
- Who takes responsibility if the software or algorithm fails to work as designed?
- What defines delivery of a software- or algorithm-based service, for the purposes of binding a commercial relationship in a contractual agreement?
- What legal recourse does a consumer or third party have in the event of a dispute over automated services?
- When, and what, needs to be changed in any product disclosure statement (PDS)?
- What are the obligations and disclosure requirements for players at each level of the ecosystem, through to the end user?
- What are the moral and legal responsibilities of local jurisdictions to regulate and monitor these activities?
How to mitigate these legal RPA risks?
Through our RPA and AI advisory work we are encountering different types of legal challenges as different clients' journeys progress.
In some cases company leadership is clearly adopting RPA aggressively, running its RPA and AI initiatives much like a business efficiency drive. For them, the main focus is the impact on the numbers.
We are also seeing more conservative adoption from management teams who are keen to address concerns about change management and the loss of human touch within their operations.
In both of these contrasting approaches, however, we are seeing that the legal implications of the changes receive insufficient attention. To assist these clients, we have worked with our legal partners to design a legal RPA framework that helps management mitigate RPA and AI risks across their automation programmes.
It is an evolving journey, and as with all technology-led change, the issues that must be addressed emerge over time. However, the rate of adoption and development of RPA and AI suggests that legal complications will arise sooner rather than later … so let's start the conversation and share ideas now.
Founder and Exec Chairman
Thought Leader | Trusted Advisor | Innovator