


Q&A: AI is transforming the workplace in novel ways

Source: Digital Journal

One specific change is in the recruitment process. Experience is starting to show that artificial intelligence, when used ethically, can positively impact hiring – from the employer's perspective as well as the employee experience. To gain insight, Digital Journal spoke with Joe Hanna, chief strategy officer at Workforce Logiq.

How is artificial intelligence disrupting business?

Joe Hanna: Artificial Intelligence (AI) and predictive data science give employers a critical edge in optimizing their workforces – especially in the midst of the most uncertain labor market we have experienced since the Great Recession. Employers need resources and context to make fast, accurate, cost-effective decisions. In times of hyper-uncertainty, organizations with access to AI and predictive data-science insights can anticipate change and plan their workforces accordingly. For example, many organizations are making the difficult decision to reduce their workforces through layoffs and furloughs to deal with pandemic impacts. But identifying and retaining key talent with the skills required to navigate the economic storm is also strategically important.

AI provides employers with the context and the benchmarks they need to make data-driven, insightful decisions to accurately forecast turnover, proactively fill and plan for talent supply and demand gaps, decide on the best markets to source talent, decipher which roles can be successful remotely, and more.

Which types of businesses are most impacted?

Hanna: Nearly every business across every industry can benefit from an AI-based workforce management approach. In light of the pandemic, every city, state, region, and job function is experiencing labor market shocks. Our proprietary AI models track more than two thousand events, triggers, and shocks that can affect employment volatility, based on more than a billion data points and 40,000 sources – every month.

A few insights from our Workforce Management Benchmark Report: Q1 2020 include: New York, at 27% above the national average, is now the state with the highest workforce volatility ranking – replacing the District of Columbia. New York is also the epicenter of the US COVID-19 pandemic with the highest number of cases per capita. San Francisco (36%+), Seattle (23%+), New York City (15%+), and Boston (14%+) are the cities with the most volatile workforces across the nation.

Not surprisingly, accommodation and food services experienced the largest percentage change, up 11% vs. 2019. Shelter-in-place orders and the cancellation of work-related travel have greatly increased worker volatility in this industry. During the pandemic, organizations that rely on certain types of talent are more at risk of turnover than others, and should consider leveraging AI today if they are not already.

Our data reveals that historically stable job categories are increasingly volatile, including critical roles. For example, the job categories with the highest percentage-point increases in worker volatility in Q1 2020 compared to 2019 include Teachers (+12 percentage points, from 76% to 64% below the national average) and Healthcare workers (+7 percentage points, from 67% to 60% below the national average) – including Nurses and Doctors, both at slightly higher risk. While these remain low-volatility categories, the movement has implications for future risk among these COVID-19 front-line workers.

Software Engineers are highly open to unsolicited recruitment messages – in fact, 81% above the national average. That is down from 105% in 2019, indicating these professionals may be somewhat more likely to stay at their current organizations to help navigate to the other side of the pandemic – but they remain in high demand and may be swayed with the right offer.

Amidst the growing uncertainty, companies need to invest in retention and talent acquisition to build a sustainable, reliable talent pipeline – especially in highly competitive fields. The predictive nature of AI can add tremendous value and insight as companies hone their strategies.

What forms of artificial intelligence are most impactful?

Hanna: The most impactful forms of AI are the ones that are predictive and prescriptive in nature. Predictive AI combines machine learning, modeling, and other technologies and applies them to vast quantities of data to identify patterns and trends and uncover what might happen in the future. This proactive approach gives companies the power to foresee potential trends, challenges, and risks – and to prepare for, or even prevent, issues altogether. Prescriptive AI takes predictive analytics one step further by using optimization and simulation algorithms, in real time, to take action and define the future. When these two forms work together, organizations can take preparedness to the next level.
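Workforce Logiq's models are proprietary, so as a purely hypothetical illustration of the predictive side, here is a minimal turnover-risk sketch: a hand-rolled logistic regression trained on invented synthetic data. The features (tenure, engagement score) and all numbers are assumptions made for the example, not the company's actual approach.

```python
import math
import random

random.seed(0)

def make_employee():
    """Generate one synthetic (features, left_company) record."""
    tenure = random.uniform(0, 15)        # years at the company
    engagement = random.uniform(0, 10)    # invented 0-10 engagement score
    # Synthetic ground truth: short tenure + low engagement -> likely to leave.
    risk = 1 / (1 + math.exp(-(2.0 - 0.3 * tenure - 0.2 * engagement)))
    left = 1 if random.random() < risk else 0
    return (tenure, engagement), left

data = [make_employee() for _ in range(500)]

# Fit weights w and bias b by plain gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for (x, y) in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def turnover_risk(tenure, engagement):
    """Predicted probability that the employee leaves."""
    return 1 / (1 + math.exp(-(w[0] * tenure + w[1] * engagement + b)))

# A disengaged new hire should score riskier than a long-tenured,
# highly engaged employee.
print(turnover_risk(0.5, 2.0) > turnover_risk(12.0, 9.0))
```

The prescriptive half would sit on top of such scores – for example, simulating which retention interventions most reduce predicted attrition – but that is beyond this sketch.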

What is meant by 'ethical AI' and does this matter?

Hanna: Ethical AI means that the outcomes and decisions made by leveraging the algorithms are fair and free from bias. The ethical use of AI is arguably even more critical in workforce management than in other use cases, because discrimination and bias – such as recruiting only from Ivy League schools, or choosing men over women for leadership roles – have been a pervasive issue for many organizations. While AI doesn't create new bias, it identifies and acts on patterns by design, so the technology can perpetuate biases that already exist within organizations. The algorithms are only as good as the data on which they're based, which means that to avoid ethical issues, AI must be deliberately and carefully marshalled to ensure positive impacts for employers – and the individuals they recruit.

When it comes to organizations' specific AI ethics and implementation concerns, a recent Workforce Logiq survey found that personal data and confidentiality issues topped the list, followed by readiness for regulations and policies, AI being used to profile humans, not having the right skills to implement and manage AI, unintended AI bias and prejudice, and displacing the human workforce with machines.

How can AI be designed so that it remains ethical?

Hanna: The number one thing companies can do to ensure the ethical use of AI is to implement a solid data collection and mining process. There are certain guidelines developers must follow to test AI and make sure it is free from bias.

A four-part validated discrimination test (plus annual retesting), designed and reviewed with outside counsel, is key:

1. Ensure parameters are free from bias, meaning qualifying filters not critical to the hiring process – such as race, age, or gender – are removed.
2. Check historical data for hidden bias, such as historical positions being held by men, or recruiting from Ivy League schools only.
3. Audit results for significant disparate impact, including a lack of minority representation in a group of applicants the technology identifies as qualified.
4. Track how users utilize the AI results and make sure they're not perpetuating bias, even accidentally.

Ethical AI is also tied to good stewardship over privacy and data protection – not using candidate-level social media, online search history, or other personal or protected-class information, and adhering to rising data privacy laws such as the California Consumer Privacy Act in the U.S. and the General Data Protection Regulation (GDPR) in the EU.
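The disparate-impact audit in step 3 is commonly operationalized in US hiring with the "four-fifths rule" from the EEOC's Uniform Guidelines: if any group's selection rate falls below 80% of the highest group's rate, the result warrants review. The sketch below illustrates that check only; the group names and counts are invented, and this is not Workforce Logiq's actual test.

```python
def adverse_impact_ratios(groups):
    """groups: {name: (selected, applicants)} -> impact ratio per group,
    i.e. each group's selection rate divided by the highest group's rate."""
    rates = {g: s / a for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical counts of candidates the AI flagged as "qualified", by group.
screened = {
    "group_a": (48, 100),   # 48% selection rate
    "group_b": (30, 100),   # 30% selection rate
}

ratios = adverse_impact_ratios(screened)
for group, ratio in sorted(ratios.items()):
    # Below 0.8 (four-fifths), the outcome should be reviewed for bias.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b's 30% rate is 0.625 of group_a's 48%, under the four-fifths threshold, so the audit would flag the screening step for review.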

Is an international framework needed for ethical AI?

Hanna: Absolutely. AI models are free from geographic boundaries – given their scale, scope, and access to information. And we can all learn from each other as we work to advance this area of innovation and responsibly handle consumer information. Authorities across the global community are getting involved in the AI conversation to ensure safe and ethical usage. For example, Europe has enacted the General Data Protection Regulation (GDPR) to give EU citizens more control over their personal data, and similar legislation is underway in California. Other U.S. states are beginning to regulate AI as well, such as Illinois with its Artificial Intelligence Video Interview Act, which requires employers to notify applicants when AI analysis will be used during interviews.

Recently the pandemic has shown that we're truly interrelated, and we need to broaden our perspective without sacrificing the values that are priorities for our organizations. In the context of AI, an international framework can set a minimum standard – and then businesses and local and regional municipalities can choose how to go above and beyond, depending on their priorities and values.

This article was from Digital Journal and was legally licensed through the Industry Dive publisher network. Please direct all licensing questions to