I partner with online-learning and training site HR Jetpack to deliver courses and webinars on Cybersecurity and Artificial Intelligence topics specifically for HR professionals. In my latest webinar, I had a really engaged audience who asked some fantastic questions that are worth sharing.
My organization uses AI to replace in-person interviews, and I’m concerned that we might be using AI too much… what are your thoughts on striking that balance?
It is critical to always keep in mind why these processes exist. The point of the interview is to filter candidates and find the best possible one. So if the AI is doing that better than a person could, then it is the right tool for the job. If it isn’t able to do that yet, then it has been pushed out prematurely.
That happens a lot with technology, either to cut costs or out of excitement and enthusiasm about a revolutionary new technique which might set the organization apart, impress investors, or move the organization in a new direction. Those moves are a mistake if they are ineffective at achieving their core goal. Never lose sight of what these tools are meant to accomplish.
In cases where an AI application is built on historical data, for example to drive predictive analytics, the software is not going to predict accurately until enough data has been gathered. How do we promote early adoption of AI in those cases?
Go in with both eyes open. Understand that you are essentially agreeing to be a beta tester for something that is still unproven. You can’t be completely reliant upon the results of this new system, so for a time you will need to run it in parallel with your existing method of accomplishing the same task.
The payoff is that by starting early, getting your team used to working on the new system and integrating it into your workflows, you might be way ahead of the competition by the time you are ready to trust it. There is a time and place to take this kind of gamble; just make sure that it can’t backfire on you and that your organization is properly protected against the risks.
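The parallel-run idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real product: the candidate data, the scoring rule, and the function names are all invented. The point is simply that the unproven AI’s output is recorded for comparison while the existing, trusted process still makes every actual decision.

```python
def screen_ai(candidate):
    # Placeholder for the new, unproven AI screening model (illustrative rule).
    return candidate["score"] >= 70

def screen_human(candidate):
    # Placeholder for the existing human-driven process.
    return candidate["recommended"]

def parallel_run(candidates):
    """Act on the human decision; log AI agreement for later review."""
    agreements = 0
    decisions = []
    for c in candidates:
        human = screen_human(c)
        ai = screen_ai(c)            # recorded, but not acted on
        agreements += (human == ai)
        decisions.append(human)      # the trusted path still decides
    agreement_rate = agreements / len(candidates)
    return decisions, agreement_rate

candidates = [
    {"score": 82, "recommended": True},
    {"score": 55, "recommended": False},
    {"score": 74, "recommended": False},
]
decisions, rate = parallel_run(candidates)
```

Once the agreement rate stays high over a meaningful stretch of real decisions, the organization has evidence, rather than a vendor’s promise, that the new system can be trusted.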
How concerned should we be about all of the data that AI systems use and collect getting hacked?
We should all be very concerned. Since the dawn of the internet age, attackers have had the advantage. It has always been easier to attack systems than to defend them, and that has been an unfortunate truth of software, apps, and the internet so far. Defensive work is incredibly difficult, while for those who seek to break into organizations digitally there is a great deal of money to be made and relatively little risk of getting caught. Until we see a paradigm shift in the technology landscape, this probably won’t change.
Having said all of that, the potential productivity benefits to an organization are too great to pass up over cybersecurity fears. I would expect most organizations to invest in their defensive teams alongside their investment in AI, and there will be a surge of hiring in both fields going forward. There will be breaches of AI-related products and systems in the future; however, I don’t expect that to slow the momentum of AI adoption. (Whether or not it should is another question.)
I’m in favor of using AI to save me time, but I don’t want to lose control over what I do. How far is AI going to encroach into my role?
Control is a fascinating question, and one that looks very different at each organization. I have two examples of ways that AI might take “control” within organizations as we give these systems more of the decision-making power that has traditionally belonged to people.
1) Being the Boss: In some offices, the boss is the person who assigns the day’s tasks to the team, the person in charge of keeping things moving and making sure the teams are as productive as they can be. For the people doing that work, I would ask: do they really care who actually assigned them the work in the first place? Today, it might be their boss Sally. Tomorrow, it might be a software algorithm. The important thing to remember is that job security related to AI is not a totem pole… it doesn’t relate to education level or past job history or existing job title. AI is about tasks… the task which you perform right now, and whether software could replace that task or not. It’s really that simple.
2) Predictive Gambling: For a long time, Wall Street rewarded the best gamblers with big salaries and huge bonuses. Those best able to pick winning stocks and other investments were praised for their “gut calls” and “instincts”. And now that era is coming to an end. The world’s largest investment firms are taking their billions of dollars out of the hands of people and trusting their own in-house algorithms to invest it. Simply put, those algorithms are the safer bet. They make fewer mistakes and have better returns than the people did. Control is being taken away.
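The “Being the Boss” scenario in (1) is less exotic than it sounds; even a toy scheduler can hand out the day’s work. Everything below is invented for illustration, a minimal sketch in which a greedy balancer assigns each task to whichever team member currently has the lightest load, doing a small piece of what Sally does today.

```python
def assign_tasks(workers, tasks):
    """Return a worker -> task-list mapping, balancing total hours.

    tasks is a list of (task_name, estimated_hours) pairs; both the
    workers and the hour estimates here are hypothetical examples.
    """
    load = {w: 0 for w in workers}
    schedule = {w: [] for w in workers}
    # Assign the biggest tasks first, each to the least-loaded worker.
    for name, hours in sorted(tasks, key=lambda t: -t[1]):
        w = min(load, key=load.get)
        schedule[w].append(name)
        load[w] += hours
    return schedule

tasks = [("report", 3), ("review", 2), ("intake", 1), ("audit", 4)]
schedule = assign_tasks(["Sally", "Tom"], tasks)
```

A real system would weigh skills, deadlines, and history rather than raw hours, but the decision of “who does what today” is exactly the kind of task that software can already perform.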
The idea of control is a funny thing… at the end of the day, we will always be the ones behind the keyboard, running the operation. However, we will increasingly give these systems the ability to run more of the world as they prove they can do it well, with better results than our own. This is sure to be a controversial dynamic as we move into the future, and one that will stir a great deal of debate.