Video: jeff kuzmich - Migrate_HR_Use_of_AI_11122025_5115993 | Duration: 59:54 | Summary: jeff kuzmich - Migrate_HR_Use_of_AI_11122025_5115993 | Chapters: Welcome and Introduction (0:01), Certificate and Credits (1:38), Introducing Today's Speakers (2:04), AI Comfort Levels (3:16), AI Workplace Considerations (4:44), AI Regulation Landscape (7:30), AI Regulation Landscape (14:54), AI Tool Adoption (34:23), AI in Workplace (37:15), Developing AI Policies (45:39), Q&A and Conclusion (51:41)
Transcript for "jeff kuzmich - Migrate_HR_Use_of_AI_11122025_5115993":
Hello, everyone, and welcome to today's webinar, HR Use of AI: Best Practices for Legal Compliance. I'm Rob Parsons, and I'll be your host and moderator today. So before we get started, I just wanna cover a few housekeeping issues. Note that the audio is gonna be streaming through your computer. There are no dial-in options. Just make sure you check your speaker volume and have that all set up correctly. Also, sometimes you may need to refresh your browser just to make sure everything's working correctly. You'll see some key commands here on screen that will help you do that. So let's take a closer look at some of the other important features we have for you. First of all, you'll be able to download and print a copy of today's deck for future reference, and you can access additional resources as well in the files and resources window at the top right of your console. That's available at any time during the event. And also note that we'll be sending a follow-up email to all attendees. You'll be able to get back in and access these resources on demand. Also, if you have any questions during today's webinar, you can send them via the ask us a question window. To submit, simply write in your question and click Submit. It's that easy. We do have specialists on the back end who are going to answer your technical questions as the webinar continues. Note we also may not get to all of those today, but please be assured all the questions we receive will be helpful as we develop future resources like this for your business. Now, once you've met the required watch time, you will be able to view your certificate of completion within the Earn Certification panel. The certificate will contain pre-approval codes for both SHRM and HRCI credit. Please note today's presentation requires you to attend the event for fifty-four minutes, either live or on-demand viewing, to earn that certificate. After today's webinar, we'll also send a follow-up email with additional information.
Finally, please note this presentation does not constitute legal advice. This is for informational purposes only. And now, I'd like to introduce today's speakers. Adam Wright is Deputy General Counsel and Vice President of Legal Product at SixFifty and a graduate of the University of Michigan Law School. Prior to joining SixFifty, Adam served as a federal judicial law clerk and worked in private practice focusing on intellectual property, employment, and commercial litigation. Seth Barony is a legal product associate at SixFifty. Seth attended the University of New Mexico, where he obtained undergraduate degrees in economics and political science before going on to receive his Juris Doctor from The Ohio State University's Moritz College of Law. Jake Dayton is a licensed attorney and legal product counsel at SixFifty, where he helps create accessible tools for employment law, compliance, and other information. Jake's background is in litigation, but his professional interests span product design, data privacy, and the responsible development and training of AI systems. So let's go over quickly what we're gonna be talking about today. Ah, okay. We're gonna go right to a warm-up poll. Okay. So I want everybody to take a look at your screen and enter your information here. Just wanna get an idea about your comfort level with using AI in the workplace. Are you fully embracing it, with approved, select AI tools for work and training? Are you testing the waters, still needing guidance? Are you uncomfortable, concerned about privacy and security risks? Or finally, are you fully hesitant, sounding the alarm on AI? So go ahead, please, mark your response.
Click the submit button, and I'll wait till a few of these come in. As you are viewing these, I'd like to point out, if you are one of our Paychex HR clients and you have some questions on the specifics of how your business should apply this information, we want to refer you to your HR business partner for guidance and support. If you're not yet working with a dedicated HR professional and you want to discuss additional support you could receive on challenging compliance areas such as AI, at any time throughout this presentation you can click the blue card at the lower right corner of your screen and request a consultation. So let's see what we've got. We've got a lot of attendees. All right, a lot of answers here. So what is your comfort level? Alright, it looks like the vast majority are testing the waters but really still need guidance. That makes sense given how new this area is. There's still a number who are uncomfortable and a number who are just fully embracing it. So Adam, how does this match up with what you've been seeing in practice and with your customers? Yeah, I think this makes a lot of sense, right? We'll talk today about sort of the challenges of incorporating AI into your workplace. But honestly, I think if you are in this position of sort of testing the waters but needing guidance, I think you're probably in the right position. There's a lot to think through when it comes to incorporating AI into the workplace. And we'll talk through a lot of those considerations today, but it's good, I think, at this point to take a bit of a cautious approach to how you're going to be incorporating AI into your workplace. And again, those are the considerations that we're excited to talk through today. Rob, I can sort of jump in and talk about our agenda, if that makes sense. Sure thing, Adam. Why don't you do that? Go ahead and drive this presentation here now. All right, thanks Rob.
All right, so we're really excited to be here with Paychex today presenting on a topic that we follow closely and think a lot about here at SixFifty. As we all know, AI is developing and progressing very quickly, and it has already changed how some HR departments are doing their work, and it will surely continue to do that in the future. This of course comes with several legal implications, which we're going to talk about today. I think it's both a really exciting and also a bit of a scary time to be in HR. As in other professions, the prospect of increased efficiencies that AI can bring to your team is really seductive, right? For instance, an AI can scan the thousands of resumes that you get for an open position. And if you have been doing that manually in the past, the idea of an AI tool doing that for you probably sounds pretty nice. These efficiencies could obviously spread through other departments in your organization, but adopting the use of AI in your organization should be done deliberately and I think with a lot of careful planning and thought. So today, again, we're going to be talking through a lot of those practical considerations that you need to think through, which will hopefully give you the confidence that you need to adopt a thoughtful and legally sound approach to AI in your workplace. So I'm going to start us off by talking about some legal updates related to the regulation of AI in the workplace that HR departments should be aware of. I'm then gonna kick it over to Seth, who's gonna jump into some of the details of some of those new laws and regulations that we're seeing. And then finally, Jake is gonna discuss creating an AI use policy in your workplace as well as using AI to do your job more efficiently. And then we'll try to leave a few minutes at the end for Q and A. Okay, so with that, we're going to jump right in and talk about some of the latest movement we're seeing in AI regulation.
So the basic pattern that we see when it comes to regulating AI is pretty simple: as AI use increases, regulation follows. And we see that pattern in lots of areas of the law, including employment law. So that's not too surprising. AI is being used, as we all know, more and more in all areas of HR: hiring, promotions, employee management. And with that rapid adoption, lawmakers are starting to respond. So states and local governments have been the first movers. They've already started introducing rules to ensure that AI is used fairly and transparently in HR decisions. So while AI adoption has exploded, the wave of regulation I think is really just beginning. And our first mover is New York City. New York City was the first jurisdiction to pass AI regulation as it relates to hiring. New York City passed Local Law 144 back in December 2021, and then enforcement of that law didn't begin until July 2023. This law regulates the use of automated employment decision tools, or AEDTs. You may hear us say AEDTs throughout this presentation. That's what we're referring to: automated employment decision tools, which are AI systems used to assess candidates for hiring or promotion. This law applies not only to jobs performed in New York City, but also to fully remote positions that are tied to a New York City office location. So if you have an office based in New York City, but you're hiring someone in Nebraska, this law is also going to apply in that circumstance. So if your organization uses AI in hiring and has employees or candidates in New York City, this is gonna be a law that you're gonna need to pay attention to. I'm gonna just touch briefly here on some of the specifics of this law. When the city passed this law, the goal was to ensure that AI tools were being used fairly, in an accountable way, and in a transparent way, to make sure that there was no discrimination in the hiring or promotion process.
So under the law, organizations that use these automated tools, in the language of the law, to substantially assist or replace discretionary decision making have three key obligations. First, the employer has to conduct an independent bias audit before using the tool, and then after that, once a year. An audit basically is a test that's going to check for what the law calls disparate impact, or different treatment among different protected classes, as a result of using that tool. So this brings up an interesting question, which is how you conduct a bias test before you actually use the tool. There's a lot of different ways that employers could do this. One, the test or the audit could look at data from past internal use, if that's applicable. It could look at use of the tool by other companies, if other companies are using a similar tool or the same tool. If neither of those are applicable, then a third party could even do a simulated test to ensure that the tool is gonna be used in a fair way. Second, employers then have to publish the results of those bias audits, so there's transparency about how the tool performs. And then finally, employers that are using these tools have to notify job candidates and employees that these tools are being used to make employment decisions. So if you're using AI in making employment decisions and you're covered by this New York City law, then you need to make sure that you've thought through how you'll provide this required notice. For potential applicants, that could be, you know, putting that information in a job posting. For current employees applying for a promotion, you'll need to have a plan in place to provide those applicants with the required notice. So that's the New York City law. Let's move along and talk about what we're seeing in the AI regulation race.
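To make the disparate-impact idea concrete, here's a minimal sketch of the arithmetic a bias audit typically reports: the selection rate for each demographic group and each group's impact ratio relative to the most-selected group. The numbers and group names are hypothetical, and this is only an illustration of the math; under Local Law 144 the actual audit must be performed by an independent auditor.

```python
# Hypothetical bias-audit arithmetic. For each group we compute the
# selection rate (selected / total applicants), then the impact ratio
# (each group's rate divided by the highest group's rate).

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Made-up numbers for illustration only.
outcomes = {
    "group_a": (50, 100),  # 50% selected
    "group_b": (30, 100),  # 30% selected
}
for group, ratio in impact_ratios(outcomes).items():
    # The 0.8 threshold here is the EEOC's informal "four-fifths" rule of
    # thumb, not a statutory cutoff in the NYC law.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this made-up example, group_b's impact ratio is 0.60, which is the kind of gap an audit would flag for closer review.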
So after New York City passed its AI regulations, two states followed suit and raced to be the first state to regulate the space. And those states were Colorado and Illinois. Colorado was actually the first state to pass a law, which was in May 2024, with Illinois passing their law a few months behind Colorado in August 2024. But Illinois's law is actually going to take effect before Colorado's law. Illinois's law will take effect on January 1, 2026, so that's coming up pretty quickly. And Colorado's law will take effect on June 30, 2026. Now, I'll also note that while there's no federal law that regulates AI in the workplace, the federal EEOC has issued some guidance on the practice. Even though it's not binding and there's no federal law on this point, it is important that you're aware of that federal guidance from the EEOC, which is not surprising guidance. It essentially says that employers need to make sure that when they're using these automated tools in hiring, there's no discrimination as a result. There's no requirement, like in New York City, to conduct a bias audit or anything like that. But the guidance does also note that if you use a third party vendor to implement these tools in your workplace, that will not relieve the employer of liability if the tool discriminates. So it's important to keep that in mind. So turning back to state law, let's jump in and talk about some of the specifics of the new Colorado law. Colorado's Artificial Intelligence Act is the first comprehensive AI law in the US at the state level. It applies to employers with 50 or more employees. Notably, the law does not specifically say whether that employee count refers to all of your employees or just your employees in Colorado. This is an ambiguity that we see throughout employment law, unfortunately. Probably the most conservative and risk-averse approach is to assume that that means all employees and not just those in Colorado.
But this law regulates both developers and deployers of AI systems, meaning that it covers not just the companies building AI, but also the organizations that are putting it into use. Unlike some other laws, and this is an important point to note, there's no private right of action in the law. This means that individuals can't sue directly for a violation of this law, and it's up to the Colorado Attorney General's Office to enforce it. The law, again, is set to take effect on June 30, 2026. So you have some time to prepare for compliance, but it's a good idea to start thinking now about how you'll do that if the law applies to you. Alright. So continuing on under Colorado's law, as we said, it applies to deployers of AI. So who qualifies as a deployer under this law? A deployer is any person or organization doing business in Colorado that uses an AI system to help make decisions that have a substantial effect on things like employment or employment opportunities, education, financial services, essential government services, healthcare, housing, insurance, and legal services. So the law is obviously pretty broad, but we're focused here today on how it applies to employers using AI to make employment decisions. An employer that uses an AI tool to screen job applicants would qualify as a deployer under the law. And let's look next at, if you are an employer who's a deployer, what your obligations are under the law. As we talked about, a lot of these obligations apply to employers that use high-risk artificial intelligence systems in their employment decision making processes. Deployers have a duty to use reasonable care to prevent algorithmic discrimination in the AI systems they use. So let's talk about what this means a little bit. The law creates a rebuttable presumption that a deployer has used reasonable care if they take several key steps.
That's just a legalistic way of saying that the law will assume that you have met your obligations under the law if you've done these things. First, implement a risk management policy and program for high-risk AI systems, which again would include using AI in making employment decisions. Second, complete an impact assessment for each high-risk AI system that you're using. Third, notify consumers, or in this case applicants or employees applying for a job or a promotion, who could be affected by decisions made by these systems. Fourth, publish a public statement summarizing the types of high-risk AI systems the deployer currently uses. So again, in this case, it would be the employer describing the AI system that they're using to make hiring decisions. And then finally, report any discovered discrimination in the tool you're using to the Colorado attorney general within ninety days of discovery. So doing each of these things will go a long way in helping show that your organization is compliant with the law if you're using AI to make employment decisions. Doing these things not only helps ensure compliance, but it also demonstrates a proactive approach to fair, transparent, and accountable AI use. So those are the specifics of the Colorado law. I'm now gonna kick it over to Seth to talk about some more specifics of AI regulation that we're seeing. Awesome. Thanks, Adam. Yeah. So I'm gonna walk through some other changes we've seen at the state level and talk about some trends we might see moving forward in regulating AI. But before I jump into the details of any specific laws, I just wanted to ask this question: what do regulating AI and eating an elephant have in common? And the answer is that both are best done one bite at a time.
Now, aside from being a hilarious joke, there's a little bit of insight in that statement, I think, about the approach that governments are going to take when it comes to regulating AI, kind of trying to tackle this huge problem that they don't fully understand. Because I think that's important to remember, right? Governments don't really have a good handle on what AI is yet. And by that I mean, how do you define the section of computer programs and technology that is artificial intelligence, that is having these effects that government wants to step in and control? Right, there's a lot of leeway in what it could be. Because we all think about, you know, something like I, Robot when we think about artificial intelligence, this idea of general intelligence, something that's able to almost mimic human intelligence. But the fact of the matter is we're not there yet, right? So the kind of stuff that we're regulating now is a lot lower scale. It's a lot closer to just being an algorithm or a predictive piece of software. But all of that to say, because we don't really understand what it is we're dealing with, we don't quite understand the potential harms that are associated with it or what the best approach is to regulating it. You're likely to see states and cities, and maybe even the federal government eventually, try to take one particular angle and address that, rather than trying to address the entirety of AI as a whole at once, right? Colorado is kind of the exception to that. Their law that Adam was talking about goes straight for the entirety of AI. It tries to regulate it in every sector of the economy. It regulates people who make it and people who use it. It's very comprehensive, or at least as comprehensive as we've seen so far. But that is, I think, going to be the minority approach.
In fact, we've already seen some of these other states, like I'm gonna be talking about here with California and Illinois, take a more conservative approach, or a more focused one at least. So let's start here and jump in with California. And there's actually a few different AI laws that came out of California recently. You might've seen something making headlines a couple months ago around some amendments they made to their Fair Employment and Housing Act. I'm gonna touch on that in a second when we talk about the Illinois law. But there's an angle in existing privacy law, actually, that could sort of operate as a regulation on AI, right? There's one in California in the CCPA, or the California Consumer Privacy Act, that basically requires businesses to make more generous disclosures whenever they use AI to make a decision that affects people. And that's gonna apply when the AI is making the decision itself or when it's substantially contributing to the decision. So, you know, if you have some kind of an AI tool that's producing maybe a recommendation about different applicants, or even internally, right, if you're screening employees for promotion or for, you know, who's gonna get hit with a layoff or what have you. Even if you have a person overseeing that and making the final call, if the AI is producing an output or recommendation or something that the decision maker is relying on to some degree, at least in California, that law is going to apply to you there as well. And that's true with other kinds of AI regulation we've seen so far as well. And the crux that that's getting at, right, is that a lot of AI-centered harm comes when it replaces human decision making. Because we don't fully understand how these algorithms and how these programs are producing outcomes, at least the government doesn't feel confident that all employers understand how their AI tools work on the inside.
There's a concern that applying them or using them recklessly could result in discrimination, be it accidental or intentional. And I think honestly, accidental discrimination is maybe the most likely kind we're gonna see at this stage of AI's development. So a lot of states' privacy laws, and we're up to 19 or maybe even 20 now, restrict how businesses use AI, or automated decision making technology as it's called. But so far only California's applies to employers, or applies to the data of employees, like using AI on employees. You know, it's kind of an open question whether that's going to change, but it's still helpful to keep this in mind, as we could see some other states use this as a model, or other states with privacy laws expand those provisions to apply them to employers moving forward. Just a quick note here on what we mean when we talk about significant decisions, right? This list looks very similar to the one on Adam's slide talking about what Colorado cares about. The phrase that comes up a lot is decisions that have legal or similarly significant effects on someone, right? And for our context here today, all that's really gonna matter for is the employment opportunity piece. So things like whether you're hiring someone, firing someone, promoting them, demoting them, changes in compensation, changes in, you know, who gets shifts, all that kind of stuff that is affecting their ability and their opportunities to pursue employment. That's the kind of stuff that matters. If you're using it for very minor decisions or decisions that don't affect employees, like let's say you are using AI to decide which consumers are gonna be served with a 10% off coupon or something like that, that's not significant.
And so in most cases, and certainly in California, that's not really gonna matter, or the law is not gonna regulate it the same way that it does if you're using it for important stuff. So that last bullet there has some areas that you wanna keep an eye on for identifying where AI tools might exist that would be regulated. So, you know, AI interviewers, resume screening tools, that's probably the biggest one. Probably a lot of those tools have been accused of being significantly biased. If anyone's been following the Workday situation, that's just kind of the latest example. Schedule creators, anything that calculates raises or compensation. Again, any tool that is used to make decisions, or in connection with making decisions, that affect people's employment or compensation is gonna be covered. So moving right along, let's talk about Illinois, which, if I had to pick one of the regulations we're talking about that I think is gonna serve as the blueprint for at least the first wave of AI regulation, it would definitely be the approach that Illinois took, which was done back in 2024, but it's kicking in on January 1, 2026. So it's good to keep in mind now. And what they did was, rather than come out and write a whole new law like Colorado did to try to regulate AI fresh, they amended the existing Illinois Human Rights Act, which is where all the provisions about banning discrimination in various contexts are held. They basically just added this little amendment that says, hey, using AI to discriminate in connection with any of these types of decisions is still discriminating. Right? You can't get out of a discrimination claim by virtue of saying you used AI or that you didn't understand what the AI was doing.
So the two pieces of what Illinois did that I think are interesting: first, that's a very low-stakes approach to this, right? It's very non-controversial to just say that employers shouldn't be using AI to discriminate. So I think it's very easy to get something like this passed, whereas Colorado and these bigger AI laws face a lot of opposition from tech companies and from tech lobbies, and so they're a little bit harder to get through. This targeted approach is very simple. And then the second piece that's interesting there is that they've said that, you know, you can't discriminate based on any predictive characteristics or zip code. Because Illinois kind of came to the conclusion that businesses were able to use zip code as an effective proxy for a lot of different protected characteristics, because folks with similar religious backgrounds, similar racial heritage, similar socioeconomic status often live in similar places. And so you could say, no, I'm not discriminating based on religion or race or class, I'm discriminating based on zip code. Illinois is saying that's not kosher anymore, and it's probably not gonna be the last state to do that. So if you do have any AI tools that are geographically discriminating against folks, you wanna be sure to try to keep them from doing that, at the very least insofar as it applies to Illinois. California, the other change I mentioned a second ago, also did something very similar. That, I believe, just took effect on October 1, where they basically issued this package of regulations saying, hey, existing California anti-discrimination law applies to AI. And I think it's very likely that we will see a number of other states take that same approach in the months and years to come.
So that's a quick rundown of the general status of AI regulation in the United States at this point. After we've said all these things and all these risks that are associated with the technology, I think it's reasonable to look at that and say, oh no, does this mean I shouldn't use AI at all? And while I think that's kind of a natural conclusion, it's maybe a little extreme, right? You don't need to abandon the use of this technology at all, because it can be quite helpful, right? It has the ability to shave a lot of time off of work people are doing and generally just streamline operations. But what this highlights is that it's really essential to be familiar with the laws in the specific areas where you have employees. Because, and this is a question we get all the time, in general these laws are going to apply to using AI on employees based on where that employee works, not where the business itself is headquartered. Right? So just by way of example, SixFifty is headquartered in Utah. I am based in Pennsylvania. So any Pennsylvania employment laws or Pennsylvania AI regulations would apply to SixFifty with respect to their use of AI on me specifically. Right? Whereas for some other employees, like Adam's in New Mexico, New Mexico law is gonna apply to Adam, and so on and so forth. So you really wanna know what the law is in the jurisdiction where you have employees, and you wanna be sure you understand how that law is evolving, and make sure that your use of AI doesn't run afoul of any of those laws. And a total ban on AI is extremely unlikely at this point. You know, maybe some states or some cities would go so far as to ban its use in these kind of high-stakes, significant decision making areas, but nobody's gonna come out here and say employers cannot use AI, barring a seismic shift in the political climate nationwide.
So, you know, in terms of some general tips that we can give you, these are some good best practices that come from common requirements among the laws we've seen so far. This first one is gonna be far and away the most important and the most helpful, at least for the laws we've seen so far. And that is to test your tools for discriminatory impact. You may also hear this called a bias test or bias audit on your AI tools. And what that essentially means is you want to run almost simulations of the AI tool, and then take the outputs and see if there's any correlation based on protected characteristics in the input. So as an example, if you have a resume screening tool, you feed it a whole bunch of resumes that are identical, but all you do is change up, you know, the racial background or the age or the religious background of the applicants, and see if that's producing more negative recommendations or more positive recommendations and all that kind of stuff. A lot of times the developer of an AI tool, if you're using someone else's, will run these kinds of bias audits before they release it, and in some places they're required to. So, you know, if you don't feel comfortable trying to audit your own tool, it might not be a bad idea to reach out to the developer and see if they've already done this, or ask in general what steps they took to make sure the AI is not being discriminatory. And that's gonna take you a very long way, right? Because in the law Adam was talking about, that'll get you that rebuttable presumption, that benefit of the doubt, which can be huge. And in these more targeted discrimination-based laws, you know, a good bias audit ideally is going to ensure that you don't use the tool in a discriminatory fashion. So that's a great first step.
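The counterfactual test described above can be sketched in a few lines: score otherwise-identical candidate profiles that differ only in one protected attribute, and compare the tool's average outputs per group. Everything here is hypothetical; in particular, `score_resume` is a dummy stand-in for whatever real screening tool you'd actually be probing.

```python
# Sketch of a counterfactual bias probe. We score copies of the same
# profile that differ only in one attribute, then compare mean scores
# across attribute values; large gaps would warrant investigation.

import statistics

def score_resume(profile):
    # Placeholder scorer for illustration: scores on experience only.
    # In a real audit this would be a call into the actual screening tool.
    return 50 + 5 * profile["years_experience"]

def probe(base_profile, attribute, values, n_copies=25):
    """Score n_copies of the same profile for each attribute value;
    return the mean score per value so the groups can be compared."""
    means = {}
    for value in values:
        scores = [score_resume({**base_profile, attribute: value})
                  for _ in range(n_copies)]
        means[value] = statistics.mean(scores)
    return means

base = {"years_experience": 6}
results = probe(base, "age_bracket", ["25-34", "55-64"])
print(results)
```

Because the dummy scorer ignores `age_bracket`, both groups score identically here; with a real tool, a meaningful gap between the groups is exactly the kind of correlation with a protected characteristic that a bias audit is looking for.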
Also, making robust disclosures is always a good move, because that will show a regulator or a government entity that comes and maybe wants to investigate you that you're being candid, right? There's no hiding of the process. It's not that you are doing something illegal and trying to cover it up, you know, and that can go a long way. And again, it can be part of these kinds of safe harbors or rebuttable presumptions that are in some of these laws right now. A third step, which no law really requires outside of California's privacy law that we talked about toward the beginning, is allowing certain employees or consumers to opt out of the use of the AI tool. That might not be workable if it's an essential part of your hiring process, for example, and then that is what it is. But that's another one of those things that shows you're valuing the protection of the employee's privacy and their choice and their ability to not have their situation be affected by an AI tool. Again, not required at this point, but it could be required down the road, and it'll go a long way toward avoiding any potential issues. So in a nutshell, just make sure you understand the logic behind any AI tools you use, and test them for bias or make sure that other folks did that for you. And the last thing I have here, before I kick it over to Rob for a poll and then over to Jake to close this out, is kind of what's next. AI is a very hot topic right now. There are a lot of working groups, kind of little legislative bodies that are trying to develop potential proposals for new laws. Over 30 states have these things, and they're issuing reports and recommendations, which are probably gonna become proposed legislation and eventually enacted legislation. California and New York would be our bet for the next ones to try to take a comprehensive bite at the apple.
California came pretty close this session but didn't quite get it across the finish line, and New York has had a few proposals in that area recently as well. And the last piece here is just that future changes might come from privacy or human rights law, because those are the two existing bodies of law that we've seen give rise to AI legislation so far. So keep an eye on states that regulate those areas a lot, such as the states that have comprehensive consumer privacy laws. All right, well, with that, I think I'm going to kick it over to you, Rob, to give us another poll. That's fantastic. Thank you, Seth, and thank you, Adam. A ton of great information there. So I just want to get a feel for what type of AI tools your business would most likely consider putting to use. What's on the table right now? We know that a bunch of you are using AI right now, but what's coming? Are we looking at customer service chat and call bots? Are we looking at marketing copy and/or graphics? Are we looking at items for supply chain and inventory controls? Are we looking at data analytics and insights? Or is it other? What else is coming into play? I like this poll question because while this topic is AI and HR, we know that AI permeates all aspects of the business, or will eventually, which means there are going to be impacts on employees and, therefore, impacts on HR. So I'd love to get a feel for where people are coming from with their AI tools. We're at about 24% right now, so please hit your polls. Let's get us up to about 50% and see where we're at. So we've got customer service, we've got marketing copy, we've got inventory, and we've got data analytics and insights. All right. Good. So we've got a nice read here. Let's see what we've got. All right. Wow. Data analytics and insights, which I know here at Paychex is a big lever for AI, a big tool, a big place to drive insights, to visualize data, to compare it, to really get the most out of it.
Jake, what are you seeing? What do you think about this data analytics and insights? It makes sense to me. What do you say? Oh, Jake might be on mute, like I tend to be sometimes when things happen like this. Seth, if you want to weigh in. I'll jump in with a thought or two while Jake unmutes himself. I'm not surprised to see this at all. I think insights really are the number one thing that businesses are looking to use AI for right now. Right? It's that idea of, let's look at this heap of data and give us some action items that we can use to operationalize and improve our function. I would have expected to see a little bit more customer service chatbots, honestly. That's another one we're seeing a whole lot, because it's kind of easy to do, especially if the tool knows your system well and knows your products. It can answer a lot of pretty basic questions without having to take up customer service rep time. Excellent, thank you. So do we have Jake back? I know Jake has also been spending time on the back end helping answer a lot of the questions that are coming in here. I hope so. Are you getting anything? There we go. I hear you and I see you, Jake. Oh, good. All right. Welcome. Thank you. Yeah. So I'll be talking about how to use AI in your workplace, talking about policies and what this means for your company. So we'll just go ahead and dive right in. Thank you, Rob. So this isn't so much an employment law question as sort of this existential dread that's hanging over a lot of people: is AI going to replace my job? And the answer is yes and no. As with any new technology, AI will definitely replace some jobs, but it'll also create new ones. That's not to say there won't be any disruption. Going back all the way to the industrial revolution, if you've heard of the term Luddite, now we just use it to mean someone who resists adopting new technology.
But in the eighteen tens, the term referred to a group of textile weavers in England, who did the work by hand, whose jobs were being replaced by the new automated looms in the factories, which were often run by women and children. And so these Luddites would go around and break into factories, smash looms, and rough people up. That's just to say that this kind of disruption, technology disrupting industries, goes all the way back; as long as there's been technology, there have been people upset by that disruption. Those factories obviously created a whole new set of jobs. Hopefully, the jobs created by AI aren't as dismal and exploitative as working at a dark and dirty loom in deafening noise for fourteen hours a day. Maybe some of you feel like that is what your job is right now. But the most probable reality is that AI won't destroy demand for labor; it'll just make new ways for labor to be used. In addition, AI will help highly skilled workers be more efficient and competitive against those who don't adopt the technology. Going back to the Luddite example, it might even give you a leg up on someone who's more skilled than you who isn't using that technology. So it's important to understand how best to use AI to play to your strengths in whatever industry you're in. We'll talk about three key concepts on how to do that, on how to use AI effectively. The first is that it's important to understand the strengths and weaknesses of the tools you use. Right now, most major AI platforms are essentially the same in terms of accuracy, speed, et cetera. Although, yeah, if you look at OpenAI's ChatGPT, they have some hilarious graphs where the range starts at a hundred; they're obviously manipulating the data to show that they're so much better. But the reality is everything's about the same right now. The important thing to remember is that AI is just a tool. It's not a cure-all.
It's not an omniscient, sentient being; Skynet isn't going to take over anytime soon. It's just a piece of technology, and like all other technology, it's only as effective as the person using it. Building on that idea, it's important to make sure you're always reviewing the AI output for accuracy. Now, that doesn't mean you have to review every single thing your AI does. For example, at six fifty, we have an Ask AI chatbot feature where you can ask questions to our AI tool on HR law, and we get hundreds of questions every day. We obviously can't review every single one before those answers reach the customer, but we do a weekly review of a sample of those questions just to make sure it's getting things right and to flag any problems. And I've actually been really impressed at how accurate it is. And finally, just make sure that you understand how the tools you're using work so that you can audit them. If there is ever a problem that someone has with your use of AI, or, heaven forbid, a lawsuit pops up, you want to make sure that you can explain your usage clearly, that you've done those bias audits like Seth was talking about, and that this program isn't just running on its own, making decisions where no one knows what it's doing. I think I saw in the chat someone asking which AI is the right AI, which tools are the best. I'll just repeat that, again, they're all essentially the same in terms of accuracy, speed, and output. If you're a frequent user, you'll know that free versions do some tasks well but struggle with others. So if you're planning to use AI at enterprise scale, it's always worth it to get the paid version. For example, we use ChatGPT and an integration with ChatGPT, and that allows us to make our own, what they call, your own GPT; basically, there are all kinds of ways that you can improve and prompt the model that aren't available in the free version. And it also just does tasks much better.
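The sample-based weekly review just described, checking a random subset of answers rather than every one, can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, log format, and sample size of 25 are assumptions for the example, not six fifty's actual process.

```python
import random

def weekly_review_sample(logged_qa, sample_size=25, seed=None):
    """Pick a random subset of logged question/answer pairs for a human
    reviewer to spot-check, instead of reviewing every single answer."""
    rng = random.Random(seed)  # seed only to make the draw reproducible
    k = min(sample_size, len(logged_qa))
    return rng.sample(logged_qa, k)

# Pretend a week's worth of chatbot traffic (hundreds of questions a day):
week_log = [{"q": f"question {i}", "a": f"answer {i}"} for i in range(500)]
batch = weekly_review_sample(week_log, sample_size=25, seed=7)
print(len(batch))  # 25 items drawn for this week's review
```

The point of the sampling approach is that spot-checking stays feasible as volume grows: the reviewer's workload is fixed at the sample size, while any systematic accuracy problem still has a good chance of showing up in the sample.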
If you've ever tried to get a free model to analyze a spreadsheet, you'll know that it struggles with that. It'll just hallucinate data, put in two numbers and then leave the rest of the columns blank. Lots of funny stuff. So, yeah, free versions can be useful to a certain extent. And this is just a good thing to remember with technology in general: if something is free, you are the product, meaning that companies are collecting and selling your data from the platform. This doesn't appear to be the case much so far with AI companies. They actually have their own data on the back end that they're training their models on, and they want better and better quality data for better and better models, so just using all this chat data probably isn't the best for the quality of their models. But we seem to be in the flood-the-market-with-the-free-product stage of technology development. So it's likely that free ChatGPT and other free models will probably disappear sometime in the next few years, just because it's so unfathomably expensive and resource-intensive to run these models, even for a single query; they can't just keep offering it for free forever. But yeah, there are some examples of companies that have been selling chat data. The most likely scenario is not that they're using it internally but that, if they are monetizing this data, they're selling it externally. So that's just something to be aware of. If you're concerned about privacy, you can always get the premium version, which has better protections, or at least more privacy options. And to reiterate what I mentioned before, AI isn't an automatic cure-all. It's a tool, and the tool is only as good as its user and its dataset. For our chatbot at six fifty, we have prompted it to use our employment law database. We've built that up over years, with thousands and thousands of entries on employment laws.
We've tried to make it as accurate as possible. So if you want good outcomes with your AI tool, make sure it's running on a good database. And I think Rob has a poll here for us. We do, Jake. Thank you, and thank you for that great information. We've talked a lot about the different laws around using AI, how to use AI efficiently, and how it's coming into play in the workplace. What we haven't touched on yet is how do you manage that? What is your AI policy for your employees in the workplace? So I'd love to get a feeling from everybody. And May, I know you didn't get a chance to answer the second poll question, so hopefully this one pops up for you on time and you can weigh in here. The question is, has your business developed an AI policy for your employees? We have some options here. A is we haven't started an AI policy. B is we are in the process of creating one. C means we've got nothing formal yet, but we've talked about AI use. D is we have a current AI policy that we're going to review and update as needed. And of course, the final option: why would we even need one? So I just want to get a feeling from you all, and I'll give you time to lean into this. Has your business developed an AI policy for your employees yet? It looks like we're getting there. May, I hope you joined in on this one. And let's see what some of our results are, Jake. Wow. Well, this actually isn't surprising to me. I'm not sure what you've seen out there. We haven't even started an AI policy: more than half of our audience hasn't even started one yet. What's your feeling on that, Jake? And what kind of guidance can you give our viewers today? Yeah, I mean, that seems to be in alignment with just how early-stage AI is.
I think people are just sort of realizing the extent to which employees are using it and just starting to think, oh, it might be a good idea to have a policy, which is the next thing we're going to talk about. That being said, with this being so early-stage, and with such limited laws and enforcement around the use of AI, there's not a lot of clear guidance on things that have to be in an AI policy. So I would just say that the most important thing in crafting an AI policy is to think about the situations where your employees will be using it and how you want to govern when and how they use AI. Obviously, if you're in one of the states that we talked about, you'll want to make sure that you include compliance with those laws in your AI policy. Beyond that, it's important to think about how your employees are using it and how you want them to use it. So, for example, can your employees use AI in drafting a press release just to help them get ideas or get a first draft, but then they need to significantly rework it for the final product? Or are you fine with them just generating it, doing a quick review, and sending that out? Can they use it to respond to email? Maybe you're not okay with AI product going external, but you're okay with them using it internally. So, again, there are no real established best practices right now, but these are the kinds of questions you want to ask. And being upfront about the policy can prevent employees from hiding their use, which can be embarrassing or even costly if they're using it without any review. In my personal opinion, I think it's a little naive to ban the use of AI or just expect that employees won't use it. It's just too useful, especially in writing tasks. If you're not a lawyer and not just writing things all day, it's just way too easy to pull up Claude and say, hey, can you draft this email?
I think it's much better to have policies in place so that employees are using it responsibly and not hiding it and making problems for the company. And so that's what I have. So what are some good next steps to take? Stay current. This area of employment law is changing a lot and will change a lot in the coming years, so come to our webinars to stay up to date on that. Yeah, that's a great place, if I can just toot my own horn a little there. But really, it is important to stay on top of this, because this is just going to be changing so much. Have conversations about AI use, security, and compliance at all levels of your organization. Make sure you have a bias audit in place. Start thinking about what policies you want and how you want your employees to be using this. Also, this is a technology that really has massive applications and can save a lot of time, so it's good to start thinking about how to incorporate it into your business, how you want to use it to be more efficient and to get better data. And like I said, make an AI use policy to make sure that your employees can use it well and use it effectively. Excellent. Thank you, Jake, and thank you, Seth, and thank you, Adam. A lot of great content today, and it looks like we have a few minutes left for questions, so that's great. Before we get into that, I do want to point out that HR compliance doesn't have to be a headache. Our HR professionals, backed by compliance analysts, help you navigate changing regulations and build policies that fit your business. Plus, our enhanced HR library keeps you automatically updated on legal changes so you can avoid costly surprises and other potential risks. Again, if you are already working with an HR business partner, we want you to refer to them for guidance and support.
And if this presentation has helped open your eyes to the need for expert guidance on the complexities of state-specific issues like this, I'd love it if you hit Yes on this poll to speak with a Paychex professional about how our dedicated HR guidance and compliance support can help you out. It's just a few minutes, a little conversation, no obligation, just to see what you're up against and what kind of resources are available to help you out. So we'll give you just a few minutes there before we get to the Q&A. We had so many questions come in, and I'm going to try to sift through them; some of them matched up closely with each other. Sandra asked, where does the audit burden lie if I'm using an external tool? For instance, using a job site that uses AI to filter resumes. I believe it was Donna who asked earlier, I'm using a big service for recruiting. Is the burden on them to do all of this? Can I trust that they're doing the bias audit? Where does the onus lie? I'd like to think that if I'm working with a big company, I can trust them, but I don't know what the law says. It's a good question. And I will preface this by saying there's just a little bit of uncertainty; I think we won't know for sure until these laws take effect and a court says for sure where that burden lies. But I think it would be likely that it could cut both ways. Right? Because even if you are using another person's tool, you're still the person deploying it, to use the verbiage of Colorado's law. So if you're out there using this tool and you choose not to hire somebody because of a discriminatory reason, that is still actionable discrimination by your business, even if someone else gave you the tool. And it might well be that you could then point the finger at these guys and implead them into the suit or something, but it would be risky.
I would certainly not say you should just trust them to have audited and walk away. Excellent. Thank you, Seth. There are a couple of questions that also came in. I know we touched on it very briefly, but how does this apply to internal hiring? And also, specifically, if I'm opting out of tools, maybe because I'm not comfortable with AI or I don't want to give them my information, but it's a part of my job, is it a discriminatory event if I'm opting out of these AI tools? How does that work? So as long as they're opting out voluntarily, it wouldn't raise any discriminatory concerns there. But then you do need to have an alternate process set up, right? If you're using these tools for internal hiring or for promotions and you have a chunk of the business that doesn't want those tools used on them, you still need to be considering those people for the opportunities, for the promotions, for the hiring, the same way that you would the people who are going through the AI tool. So it adds this extra burden where you're going to have two parallel hiring processes that use different procedures, which could be a challenge. Very good. Very good. There was a lot coming in on states. Pascal asked, if I work in PA but my company is headquartered in Virginia, does only the PA law apply to me, or does the company have to apply both the Virginia and the PA laws in that case? And a related question: a company has its headquarters in PA, but they have one employee in New York City. Do they have to follow that law for all their employees or just that one employee? It's starting to feel really complex here. Yeah, that's a question we get a lot, and it's really complex when you have a multi-state workforce. There's a general rule of thumb when it comes to employment law that it's going to be the state where the employee performs most of their work.
That's the state whose law is going to govern your employment relationship. But sometimes laws have specific language in them; the New York law, for instance, specifically says that it applies to jobs performed in New York City and then to fully remote positions that may be outside of New York City, as long as the position is associated with the city, for example, if they're reporting to an office in New York City. So generally it's going to be the law of the state or jurisdiction where your employee works, but there may also be specific language in laws that you need to pay attention to, language that says which employers the law applies to and which employees or potential employees it applies to. Excellent. Thank you, Adam. Just a question that came to me as we were reviewing this. It looks like a lot of the regulation right now is indeed at the state level, and it's evolving and moving. That seems to me like a real compliance nightmare, especially for a smaller company. What are your thoughts or recommendations on keeping up with multiple state laws and those state changes? We see this a lot in data privacy, which is something else six fifty handles, where there's been no federal action on a comprehensive privacy law. So now we're up to a patchwork of 19 or maybe 20 different state laws. And unfortunately, it's really kind of a choice: either only hire in the state you're in, or the select states you're comfortable with, and lose out on some talent, or find ways to keep up with the law of all 50, or 25 or 30, states, and that can be a challenge. One of the big things we do at six fifty is provide updates to employment law straight to your inbox. So there's a plug for us right there, but really, you've got to find a service that you trust and like, and that works for you. Fantastic. And I think I've got time for one more question.
This is from Dawn, who asks, does your AI policy belong in the employee handbook? I think that's a pretty natural place to have it. If you have all your other policies that employees need to pay attention to and access there, I think that's a great place for it: you have all your policies in one place where employees know where to go look. But I think the important thing is, no matter where it is, if it's a new policy, be transparent. Inform employees about where that policy is and where they can find it, and make sure it's in a place they can access easily. Jake, sorry, I think I cut you off. Anything you wanted to add there? No, that's what I was going to say. Yeah, that makes the most sense. It's certainly not required, but that definitely makes the most sense. Excellent. Well, I want to thank the three of you for all of this great information. Obviously a complex and evolving topic, and it's great to have experts such as you here to help us out with that. And I also want to thank all of the audience that joined us today for HR Use of AI: Best Practices for Legal Compliance. We want to thank our presenters from six fifty, too. As a reminder, you can access a printable copy of the presentation deck in the Files and Resources window. We're also going to be sending a follow-up email that you can share with anybody else you think might be interested. This whole presentation will be available on demand. And of course, if you can spare a few moments as we close, we welcome your feedback on a brief survey that will pop up. Your responses will help us improve future resources like this to support your business. So everyone, thank you for joining us today, and I hope you all have a great day.