In-House

Navigating Liability and AI in Contracts

Episode Summary

Jessica, Natalia and Laura discuss the complexities of commercial contracts. They zoom in on data breaches, liability, indemnification, and how AI is impacting legal agreements.

Episode Notes

Jessica and Natalia of DocuSign interview Laura Belmont, General Counsel at Civis Analytics. They discuss the complexities of commercial contracts, as well as data breaches, liability, indemnification, and how AI is impacting legal agreements.

Laura describes defining and negotiating data breach clauses, standardizing security terms, and the risks posed by AI tools.

As GC of Civis Analytics, Laura helps businesses thrive while bringing data-driven truth to organizational decision-making, for some of the world's biggest companies and the world's largest social causes.

---------

Time stamps:

02:22 - Meet our co-host Natalia

03:57 - Meet our guest Laura

06:13 - Facing litigation costs

08:46 - Negotiating data breaches

13:49 - Indemnification and liability today

20:23 - AI agreements

28:25 - Getting around negotiated caps

35:00 - Cross-functional collaboration

41:00 - Keep or redline?

---------

Links:

Find Laura Belmont on LinkedIn

Find Jessica Nguyen on LinkedIn

Find Natalia Jhaveri on LinkedIn
More about DocuSign

Episode Transcription

[00:00:05] Jessica: Hello, hello, our in-house friends. Wow. After a long hiatus, welcome to a brand new episode of In-House, the podcast for in-house legal professionals. It's not a secret anymore, but there was a bit of a hiatus because I have inherited, not another baby, whew, I can only make one. There's no more of those coming outta me.

[00:00:30] Another baby, a contracts baby from Nada Alnajafi, the founder of Contract Nerds. If you haven't heard, DocuSign, the gracious sponsor of this podcast, has acquired the Contract Nerds community. So lots of things happening in our world, and in honor of this new acquisition, this episode is a very special one.

[00:00:54] We're gonna get down to the bare brass tacks of what it really means to be a commercial attorney, and we're gonna talk about the fine print that really matters in a contract. I'm gonna guess that this episode is really gonna be meaningful to all of our contract community, so I cannot wait to share it with you all.

[00:01:14] Now, I am not alone on this episode. I have two very fantastic and experienced guests when it comes to all things commercial agreements. My DocuSign co-host for the day is one of my very special friends and favorite colleagues here at DocuSign. Fun fact: she has been working at DocuSign for about a decade, which I joke with her is about 70 years in technology-company years. Folks, please welcome Natalia Jhaveri to the show. Natalia, how are you?

[00:01:48] Natalia: Hi Jessica. Thanks so much for having me. Really excited to be here, to talk to you and to Laura today. All things nerdy contracting. It'll be great.

[00:01:57] Jessica: When I thought about this episode, there was no one more nerdy about contracts than you, Natalia, here at DocuSign. And I say that that is the biggest compliment, by the way, coming from me in this community.

[00:02:09] Natalia: Oh yeah. I take it as a compliment. That's exactly what it is. We are all contract nerds at heart. I think you have to be to do what we do, right, and to still be doing it after all these years. So it's great.

[00:02:19] Jessica: Yes, for the guests who are listening and getting to know you, Natalia, can you give us a bit about your background?

[00:02:25] Natalia: Yeah, happy to. I've been at DocuSign for almost 10 years, like you said. I currently run our team that supports commercial contracting for North America, Latin America. As well as Asia and Australia and New Zealand. We do all of our contract negotiations with our customers when they're purchasing DocuSign.

[00:02:42] We also run our templates, making sure that our templates and playbooks are up to date, and critically play the role of Customer Zero, or DocuSign at DocuSign, where we are using our own tools, our IAM product, for example. We're testing, we're providing feedback, and we're using those tools in our negotiations, which helps us every day be more efficient and successful and get our teams to close their deals quicker, which is great for everyone, right?

[00:03:07] Jessica: Absolutely. All right. Well, Natalia, I know that you've met her in our preparation calls, but for our guests here, you are in for a special treat. We have another contract nerd on this episode, and not only does she work on contracts in her day job, she is a regular guest columnist for Contract Nerds itself. I think her blog is one of, or the, most popular blogs on Contract Nerds.

[00:03:35] She is a record setter, sorry, an award-winning record setter. She is actually, proudly, folks, showing Clausey, the new name of the Contract Nerds mascot, on the video screen, because she is a winner. Folks, please welcome to the show Laura Belmont, General Counsel of Civis Analytics. Laura, how are you?

[00:03:57] Laura: Hi. Thank you so much for having me, and Clausey, my award, for that lovely introduction. You know, I love the Contract Nerds community, so I'm so excited, also as a DocuSign customer, for your taking on this community, and I'm so excited for Contract Nerds and for DocuSign.

[00:04:16] Jessica: Absolutely. This has been everything coming full circle. I couldn't think of a more relevant and actually high-value speaker for this episode than you. You're a Contract Nerds writer, you're a regular speaker on all things contracts and AI, and you're a DocuSign customer. I was like, win. So it's so great to have you.

[00:04:39] Let's get to the episode. Um, I know that's pretty meaty. And one of the things that we hear often is, I go to a lot of events and we go to a lot of CLEs, and people only learn a lot of high-level topics when it comes to contracts. So I'm gonna go straight to it.

[00:04:56] Like I'm on a first date and I'm telling you that I wanna get married and have three kids. All right. Are you ready, Laura and Natalia?

[00:05:03] Laura: Maybe.

[00:05:04] Natalia: I hope so.

[00:05:05] Jessica: Okay. Okay. I hope I didn't scare you away with that introduction and thank goodness that I haven't had to go on a date for over 20 years now, so I would fail incredibly.

[00:05:14] So we're gonna jump in it, and we're gonna start with a hypothetical scenario that I'm sure a lot of our contract nerd peers are thinking about all the time as they negotiate indemnification and limitation of liability clauses. And that is a hypothetical scenario of, oh no, what happens if there's a major data breach that impacts my company?

[00:05:36] And not because it was my company's fault in terms of us not following our security protocols and policies, but because of a failure by a third party vendor. Now our company is facing millions in fines and litigation costs because obviously those costs are gonna far exceed any fee that you pay a vendor.

[00:05:59] All right, we're gonna start with you, Laura, as our special guest. I know you feel very special to be able to talk about this topic. Walk the listeners through, uh, your thought process when this happens, of, what do we do?

[00:06:13] Laura: Okay, this is, like, a very triggering scenario for somebody who works for a company that is a data analytics warehouse and platform. So for me, when I knew this was going to be the topic, I was like, oh, I'm ready for this one. This is for me. One of my major concerns is we are entrusted as an organization to keep our customers' data safe and confidential.

[00:06:39] And it feels slightly different when you're a data warehousing company than if you are a company that's doing all different things and has some data, right? You have credit card information; it feels like a different relationship. So I think the first thing, if we're talking about in the contract

[00:06:54] space that I want to talk about, for people to think through, is: in this scenario, we're assuming there's some sort of a breach or a security incident. And so the first thing I would say is you need to understand how your contracts with your clients, and then your contracts with your vendors, define breach or security incident. There's a pretty big spectrum in terms of how different organizations are writing this. If I am on the vendor side, I really want, like, an actual confirmed unauthorized access, right? Like, I want to have gone through my investigation, I know the door was open, somebody came through, and there was actual access.

[00:07:38] If I'm on the customer side, I might wanna know a little bit more, I might want reasonable belief of unauthorized access or suspected access. So the first thing that I would really wanna think about is how is this term defined in the contracts and is there a disconnect between the way that maybe my contract with my vendor and my contract with my clients define the term?

[00:08:02] Because maybe, you know, they don't have to tell me. Maybe they didn't have to tell me something.

[00:08:08] Jessica: Hmm.

[00:08:09] Laura: Maybe they had a slower notification requirement, but I was supposed to be notifying my customers earlier. So that's really the first thing I think we need to get a handle on: like, what does a security incident or a breach even mean?

[00:08:24] Jessica: Third-party vendors' obligations to your organization, and your organization's obligations to your end customers. Natalia, since you've been doing this for a decade now, this is a common dance for you. It's like how I'm pretty good at the Running Man. I do it all the time, still today. I'm bringing the nineties back.

[00:08:42] Natalia, this is a muscle for you now. Like, what do you commonly see when you're negotiating these data breach definitions and data protection agreements for DocuSign?

[00:08:54] Natalia: Yeah. For us it's really about how do you standardize, right? So it's taking what you define as a security breach internally. What does your security team say it is? Talking to your players, the people that are gonna be responding to this breach, right? So you need to set not just what is a security breach, but also the timeframes.

[00:09:12] When are you notifying customers of a security breach? When is your vendor notifying you of the security breach? I will be very candid: like many companies, I think we see customers have lots of different expectations for what they want. So being really firm in what your standards are internally helps you then contract in a way that you can standardize or operationalize your processes, both with your vendors

[00:09:37] and with your customers. So you say, okay, this is what our definition is; we push that out to our vendors, and this is what our definition is for our customers. And having that standard helps you, because at least for us, right, and many companies like us, you're multi-tenant. If there's a security breach, there's a likelihood that more than one customer is impacted.

[00:09:56] You need to be able to standardize how you do those things. And so when we contract, we think about that and we try very hard to make it so that we're not one offing what we're doing. It is much more of a standard operation.

[00:10:08] Laura: And I'll take that even further, Jessica, and say, it's also thinking about what might the legal requirements be in terms of, for something like this, a notification, is there a regulation that says that I need to tell consumers in a specific amount of time? Because then we might have all different timeframes.

[00:10:28] We're saying when the vendor was supposed to tell me, when I was supposed to tell a customer, and that could be a range across customers. It could be 24 hours, it could be 72 hours, it could be this undefined "immediately," which I often see in these agreements, of, like, we need to immediately know if there was some sort of breach, however it's defined.

[00:10:49] So it really is understanding all of those timeframes, because unfortunately, like, we're gonna have to operate under the tightest timeline that we've agreed to. And you're likely gonna be under that for pretty much all of your customers, right? If you have to prepare one notification, you should be ready to send that out.

[00:11:05] So it's really lining up those timelines.

[00:11:08] Natalia: There's a regulatory component to this too, I think, building on what Laura said. For example, I don't know how familiar you are with Reg S-P, which just came out from the SEC. They require 72-hour breach notification from the service providers, and that gets flowed down from their clients to the service providers to us.

[00:11:26] So that's something you have to keep in mind: there's a shifting landscape for security breach notifications, and not just in the U.S. We see this globally, right, coming down from the EU as well. So I think that's another component: you have to be ready to be agile in terms of, we might have to change up our program.

[00:11:43] We might have to find a way to make this work. 'cause we're legally required to.

[00:11:50] Jessica: Okay, so you hear that, folks: definitions of data breach are really important, and also keep an eye out on your notification obligations. I mean, Laura, what are you seeing as a common notification period that folks are willing to compromise on and accept? Because when I negotiated a lot of DPAs, I generally always pushed back on "immediate" and tried to land on 72 hours, 'cause folks weren't accepting "without undue delay" language, which is the language required by GDPR.

[00:12:25] Laura: Right. I was going to say 72 hours is what I think people see as reasonable, with an understanding that the investigation is going to start once somebody suspects a breach, right? Like, it's 72 hours from when you are kind of learning or suspecting, not from when there was a confirmation. Um, because I think what you need to watch out for is you don't want people to delay starting an investigation because they're trying to be cute with language.

[00:12:55] So we wanna really talk about, okay, when did we start really, like, researching and then confirming. I think we're moving away from "immediate" and "undue delay." It's just they're not defined. Especially as we're looking at DPAs and people are talking more about personal information, there's way more specificity, often driven by regulations.

[00:13:17] Jessica: There's an art there too. Is it from the period of the suspected breach or confirmation of breach? That was also part of the language dance that I remember negotiating as well.

[00:13:27] Laura: Right, depending on whatever side you're on, right? If you're on the vendor side, you're gonna want more time. If you're on the customer side, you're gonna wanna know earlier. So just keep in mind whatever side you're negotiating from.

[00:13:39] Jessica: Absolutely. Well, Natalia, what are you seeing? We're gonna jump to like our favorite clauses that I'm sure you negotiate all the time, and it's related to liability. So when it comes to indemnification and limitation of liability, what are you seeing right now?

[00:13:54] Natalia: I think, I mean, like everything, customers wanna be protected if there's something that happens. I think the issue is, there are different levels of that, right? Like, we're not their insurance company. We are certainly here to help and take responsibility for our actions that lead to a breach or that result in a breach.

[00:14:11] Certainly we wanna be responsible for our vendors. On the other side, we expect our vendors to be responsible for their vendors, et cetera. So there is that component, but I think people want money and they wanna feel like they're protected in case the worst case scenario happens that they can go to their contract and say, I have unlimited liability for a data breach.

[00:14:30] So what we try very hard to do are things like super caps, where we say, okay, what's a realistic amount that we could agree to? Can we tie it to what you were spending with us, right? That you get back the money that you spent, the benefit of the bargain, as we all learned in law school. But you

[00:14:46] find ways to make me whole, and then to help me if there's remediation costs, notification costs, that kind of thing. So it's kind of a push and pull, and a lot of it is based on how big of a customer you are, right? If you are a giant customer, of course we are going to say, all right, we can look at increasing that liability.

[00:15:04] If you're a small customer that's not spending that much money with us, what's our risk level here, and our risk tolerance for liability? So we get that a lot. Indemnity, I would say, a lot of it is more standard now. I think people are pretty reasonable. Other than the occasional people who want first-party indemnity, most of the time they understand, okay, there are certain things I want indemnity for, where I really think I'm gonna get a third-party claim if something happens.

[00:15:27] But you get into those direct damages and consequentials, I think you see a lot of people reaching for the stars in terms of what they want from their providers.

[00:15:41] Jessica: Yeah. Laura, anything you would like to add?

[00:15:45] Laura: I think we're also seeing now, when we're talking particularly in the context of AI and some of these huge providers, it seems like at some point we're just kind of like, okay, we will take these terms, right? I know I'm not going to be able to negotiate with OpenAI or with Anthropic or with Google on these terms, so I'm going to accept them. And when you look at some of the language and the caps, like, you know, I'm looking even at Meta's, I think there was indemnification language in their main agreement, and when I'm looking at some of the caps, it's like 200 Canadian dollars, right, on limitations.

[00:16:23] So I think, again, it comes down to that leverage question of if you're with a large organization and a smaller one, and it's really trying to understand your specific use case for every tool, of, like, what are the real risks here? I always say that, coming back to a contract, like, what are the most likely scenarios, so we know how much we need to focus on these particular clauses?

[00:16:48] How are we using tools, how are we using software, so we can know if we're willing to accept language that's not great for us, or if we kind of plug our nose and take it, or if we look for an alternative vendor.

[00:17:02] Jessica: Yeah. I like that, Laura, because I think when us lawyers are negotiating deals, we think so much of the obligations of the vendor and data-use minimization principles, but there's also a burden on us as the users. Like, we can control that risk by what data we input and the various scenarios we use the data in.

[00:17:22] And what I was thinking too, and we don't have to talk about it because none of us are insurance experts, is, as a customer, if I'm not really sure if this vendor actually has the assets to write me a check, or my client a check, for $10 million or $3 million or whatever the super cap is, I really look out for making sure they have sufficient and adequate liability insurance and cyber liability insurance, and try to become an additional insured on that policy. Uh, well, that was a very thrilling introduction, ladies. I hope you found that a fun exercise, so we can get down to, um, our next segment, which is really digging down into some real-world examples. Uh, the title I called it on my notes was Case Studies in Catastrophe. AI really helped with that labeling, by the way, or I'm not that clever.

[00:18:14] All right, so let's say you have clients, and we definitely do have those clients, who are using various, uh, generative AI tools that bring risk of copyright claims against, um, a provider. And as you may have heard, this is a real risk. All of these large language models are getting sued for various copyright claims.

[00:18:35] And recently, about a month ago, Anthropic just settled and agreed to pay $1.5 billion in a settlement for use of data that it was getting from what I call a Napster-like database, versus lawfully procuring the data from a library. All right, so let's say your clients are using an AI provider's tool, and the indemnification clause, remember we're talking about leverage now, is pretty restrictive. It's limited to direct damages, and there are specific requirements on our clients that they must use the tool in compliance with the supplier's usage requirements, right, to get full protection. And what I'm seeing here is stuff like making sure the input is not intentionally trying to break the system, like writing in the input,

[00:19:29] I want to write a story about a boy who's a wizard, and he goes to a school like Hogwarts, et cetera. And then also verifying the output to make sure it's not similar to something that is copyrighted or trademarked, et cetera. What do you think? When this plays out in the real world in practice, which we all know is happening, um, what are your thoughts on how we actually protect our clients in this scenario, knowing that we don't have the leverage against the large language models? And then I would love your thoughts on what are the key legal distinctions between the damages, as a secondary question. Natalia, you mentioned it earlier. You spoke my love language: you actually thought about direct damages and consequential damages, which are very different kinds of damages. All right, Natalia, do you wanna kick it off with your thoughts on this happening in the real world?

[00:20:23] Natalia: Yeah, we see it pretty consistently from our sophisticated customers who understand AI or who have great counsel on AI. I would say, as a sub-comment, that is actually really important, because making sure you have your head around what are the risks of contracting with somebody who uses AI in their services, or who's providing you with AI services, is really important. We see customers all the time who don't. They have no idea, they have no counsel on this, and so they lose sight of the forest through the trees, I guess you would say, um, when it comes to negotiating AI deals. But the litigation piece is huge. We, on our end, you know, we'll have customers who say, we want you to be responsible if something goes wrong.

[00:21:07] At the other end, we are using third-party providers like Meta or, uh, Microsoft and OpenAI, right? And they're subject to litigation. How do you protect yourself against that is tricky when you're the person in the middle. You have to find a kind of give and take, right, of saying, okay, we'll take some responsibility, but for known issues where we know there's litigation,

[00:21:33] maybe we don't take that responsibility. Maybe that's a use-it-at-your-own-peril kind of thing, and we have to figure out how to contract around it. So it's not an easy answer, and not one that I would say fits every scenario, right? Like, yeah, it depends on where you're sitting in that string.

[00:21:54] Jessica: Hmm, I see. Laura, I know that you think about AI agreements all the time.

[00:22:00] Laura: Yeah. I'd say, again, thinking about what your company's use case is, is really important here. If you're using one of these large models, are you doing it to generate content where you could potentially face one of these third-party claims? Or is it mostly for internal use and operational efficiency? I really think about this also in terms of all the code generation, and that's where, if we're talking about the different types of damages, I'm thinking a lot about it. Because if we are using one of these tools to help us generate code, and that code ends up being, like, a core part of one of our product features, I'm really going to want to start thinking about indirect damages more. Meaning, you know, if we talk about direct damages, those are damages that are really foreseeable, coming right out of a claim.

[00:22:49] So if we lost money, if we had bills we had to pay. But the indirect damages being more downstream. I know that if I'm using code and that's part of the core feature of my product, and we have to later rip the code out or pay a license fee to be able to have that code, and that's an uncovered

[00:23:12] indirect damage, like, that's where the big money is. So you really need to think about your use case and kind of map through what it would look like, what third-party claim could there be, based on my use case. So I would know, all right, am I gonna have lost profit? Am I going to have lost business opportunities?

[00:23:30] Am I going to have a lost investment in a product that we can't launch anymore? Because there was one of these underlying claims that we had to deal with. So it's really like map that use case and then plug back in to think, okay, would, would that be covered by direct damages? Would that be indirect?

[00:23:48] Would that be consequential so we know where we can try to push back when we're negotiating an agreement?

[00:23:54] Jessica: That's a really good point, Laura, that the use case really is significant here. It's one thing to use generative AI tools to make a funny picture or run a marketing campaign and easily walk that back, but what's embedded as part of your product, that's a very different risk profile. So great point there.

[00:24:12] And before we even move on, Laura, you made a great point about AI and the beta language trap. Tell me more about that, because I have written beta language in many technology agreements to carve out a lot of our strict requirements on expanded caps and even security, 'cause it's beta, it's used at your own risk

[00:24:33] and as is. Well, isn't, like, every tech company rolling out beta AI features? Tell us more.

[00:24:39] Laura: Right, it's not something that's new to just AI contracts. Probably all of our software, uh, contracts have this, where we say beta features are as is, we're assuming no liability for anything that happens. The problem that I see with AI really is, when we are going through the procurement process and approving a tool, we're looking at these terms of the agreement as is, right?

[00:25:04] We're thinking about, originally we were thinking about AI really as a chatbot, and we're saying, based on that usage, yes, these are the terms that we can agree to. But then the company rolls out connectors, right? So where the chatbot is being connected through an MCP server to all of your underlying systems, if you turn it on.

[00:25:24] Or there are agents, right, that are connecting to all of these different platforms. That really significantly changes the risk profile of the tool. But a lot of those, when they're rolled out, are still in beta. Connectors were in beta for months. And so if we are using those features when they're in beta, all this, like, beautiful language that we may have agreed to and worked really hard to get

[00:25:51] isn't going to matter. So that's also just, I don't know if we can get any carve-outs to beta language, but it's a point for all of us: once we look at that agreement, remind our teams and flow that down to them, that, like, if you're turning on this new feature, we need to see what's the production status of that feature before we turn it on.

[00:26:13] Jessica: Yeah, really smart call-out. It seems like a contractual creativity there is that written consent is required for certain high-risk beta features, and it needs to go through the same IT security review and legal review if it's, again, a high-risk beta feature.

[00:26:31] Because when a lot of us were writing those contracts, I wasn't thinking about AI and agents and connectors back in the day. When I say back in the day, folks, I'm talking a few years ago, not like 10 years ago. The world is changing so fast.

[00:26:45] Laura: And it's just another plug, too, of why, with what we're seeing in the contract space, it's so important that we work closely with our other teams and our IT teams to know, right? Like, okay, maybe we can try to negotiate to make sure that things are set to off before an IT administrator toggles them on.

[00:27:05] But we need to make sure that kind of your contract and your legal team is really aligned with whoever the tool owner is so that whatever you've negotiated and what you're talking through, they understand the importance of why we can't roll out this new feature yet, why we need to wait, or why it's something that even our individual users can't just connect to G Drive.

[00:27:24] Right? That maybe they need approval for that.

[00:27:26] Jessica: Correct. Correct. And maybe it needs to stay in a sandbox environment where there's, uh, low risk or dummy data versus a production environment.

[00:27:35] Laura: Exactly.

[00:27:35] Jessica: Alright, so limitation of liability. I know all three of us love that, and that is gonna be a clause that legal teams will negotiate probably till the end of time, because the vendor will always wanna cap it and the customer will always want it to be expanded or uncapped. Until there's ever finalization there, we will have jobs.

[00:27:56] So in terms of, I know all of us have negotiated super caps, which are usually a multiplier of the fees paid in the technology world. In the, you know, widgets world, where you're selling a good or service, maybe it's significantly more than a multiplier, like five or ten times the annual fees paid. Um, what keeps me up at night is thinking, what if those caps don't actually

[00:28:21] get enforced? They get litigated and they don't actually get enforced. What are you all seeing here in terms of creative carve-outs or creative ways to get around those negotiated caps, if you've ever seen it? Natalia, do you wanna kick it off?

[00:28:37] Natalia: I don't know if I can speak to the litigation piece of it, other than to say I think the creativity matters. And this goes back to our initial conversation about security breaches and how you define a data breach: what are you carving out in that super cap? If you're saying any breaches of your obligations under the DPA, or under your security terms, or related to a data incident or security incident, however you define it, you better be really certain of what you're agreeing to carve out in there, because

[00:29:06] that could be a whole host of damages that might actually even exceed the cap. And then maybe you have the customer coming back and saying, well, actually, it's broader than that, and here's why, and here are the definitions, and here are the obligations that they need to meet. And so I think it's about really narrowly tailoring

[00:29:25] what you're carving into that cap, or what you're carving out of your caps in general, to make sure that you're really certain, like, this is the narrow scope that they can recover under here, um, so that you're not litigating the broader, kind of more amorphous definitions. Right.

[00:29:42] Jessica: Hmm, smart. Laura, any thoughts? Laura?

[00:29:46] Laura: I'd say, in terms of what I'm seeing and hearing about unenforceability, it tends to fall into a few different buckets. I mean, we talked a little bit about, is it unconscionable, right? When we're talking about bargaining power, it's something that I'm very curious to see how this will play out in the AI space. Was this completely unequal bargaining power? Did we try? Were the terms hidden in 12 different hyperlinks that we had to go through, so that it would be, like, very difficult to find? You know, I think unconscionability being one piece of it. Another one, I'd say, is, like, gross negligence or willful fraud.

[00:30:27] I think you should try to have an out and a push or an argument there, where maybe you don't have a great cap or the cap is too small, and there is some very significant wrongdoing that you can point to. And then another one, which again, going back to AI, it's everywhere: public policy violations.

[00:30:47] I'll be very curious to see how that plays out. I think if you were agreeing to something, particularly in a consumer protection space or in a more regulated area, maybe employment law or financial, and what you are contracting to maybe conflicts with statutory obligations, or seems like it goes against the spirit of those obligations.

[00:31:09] I think that's an area where, you know, you're going to maybe have a little bit of a battle.

[00:31:14] Jessica: Yeah, all really great points too. I was thinking, Laura, that it really comes down to policy, arguing public policy: that agreeing to enforcing these strict cap provisions is just against public policy. And then tying it back to gross negligence, willful misconduct, or fraud. And I was thinking back to the Anthropic case, and I'm sure a lot of large language model providers, where they did scrape the data for their initial training set,

[00:31:42] and used data for not a transformative purpose, and they didn't actually procure it lawfully or pay sufficient license rights for it. That is gonna be a problem for a lot of LLMs. But I'm not sure the courts have been very clear on whether that will be a carve-out as a matter of policy.

[00:31:59] Laura: And I'm curious to see how it plays out with transparency too. Because I think right now we, as the customers of these large tools and frontier models, don't understand, right? It's a bit of a black box as to what the training data was. As regulations are requiring more transparency, if transparency into underlying data sets becomes more front and center

[00:32:25] and we understand what they are, is that something where we could really come in and say, oh, well, we didn't know, right? Are we gonna have an obligation, if a company is being transparent about what they're training on, to understand that before we can say, oh, well, you're right,

[00:32:40] we didn't know about that? Not that they're going to admit that they're scraping the entire internet and stealing people's data, but if there are some violations, again, I think this goes to your point of how you use a tool. And if we are at fault at all, you know, are we gonna see a percentage of that fault?

[00:32:58] Or is it just gonna wipe away any sort of claim that we have?

[00:33:02] Jessica: Yeah, I bet we're gonna start seeing a lot of this in negotiated contracts. And Natalia, you can correct me if I'm wrong in our DocuSign deals, but there's gonna be a lot of "to the extent" language: to the extent it's DocuSign's fault, then we're responsible; to the extent it's your fault, client, 'cause you misused the service, then you're on the hook.

[00:33:21] So very clear risk allocation, as opposed to a black-and-white, you know, one party's responsible for all those kinds of claims.

[00:33:28] Natalia: Oh yeah, we're definitely seeing that.

[00:33:30] Jessica: Yeah. Well, good, because really each party creates the risk. So I'm glad to hear we're seeing more of that, you know, "to the extent" type of obligation.

[00:33:38] What I haven't seen though, actually, and tell me, correct me if I'm wrong, 'cause it's been about a minute since I've negotiated large enterprise technology deals: do vendors ever ask customers to be sufficiently insured? Because I see that could be an interesting thing as well, huh? Oh, that could be an idea, Natalia, for a playbook. You know, requiring customers to be insured. Well, they also create risk. Ah, idea.

[00:34:03] Natalia: I like it. Mm-hmm.

[00:34:05] Jessica: We know the customer comes in, like, I'm paying you $20,000, but they can expose the vendor to millions in risk too, by their misuse of the AI tool, or by introducing bad data or data that was not lawfully procured.

[00:34:20] Hopefully the folks at OpenAI and Anthropic are not listening. Do not listen to this show, because I don't want you to put that obligation on all of our customers, but—

[00:34:28] Laura: It's in one of the hyperlinks already, probably.

[00:34:32] Jessica: All right, let's move on to being a more proactive practitioner. One of the things I would love to hear from you both is the best way to get one of our clients to understand the importance of advocating for more favorable indemnity language, liability language, or even just definitions of data breach, which is often overlooked by clients who are solely focused on getting the deal done.

[00:35:04] Any tips on how to cross-functionally collaborate with our partners on the business side who are solely focused, with their one-track mind? They care little about the future and the future risk. Laura, you wanna kick it off with some tactics? You're very persuasive. I can see it, I can sense it through the screen.

[00:35:21] Laura: Well, thank you very much. Come work at my company and we'll see if that still holds up. Um, I would say training is always an answer at a company. I like to lead from a place where I do like to think that all of our employees are trying to do what is best for the organization, and where we may not be doing our best in that space, it's often because they just don't know, or don't have the information they need to be empowered to have those conversations. Indemnification and liability are concepts and terms that I think are hard even for lawyers. So really explaining to our teams on the front line of sales what they are, and quantifying them, I think is incredibly important.

[00:36:11] Like if you say, okay, we're a data platform company and we're multi-tenant, right? And if we were to have a breach and each customer came to us with this, right? And our insurance is only in the aggregate this, right? Really quantify it for them, because they're just really difficult terms, and they feel like something

[00:36:33] we don't have to worry about. That's only if something bad happens, right? Like, let's focus on the most commonly negotiated terms, the price and the payment terms, right? And that's obviously so important, but if it does happen, it's a huge risk. So I think quantifying it really helps ground them in an understanding.

[00:36:53] But then I'll also say, I always want to give them that out of, like, we gotta talk to legal. Because for salespeople to be going back and forth on these terms, right? One, somebody's coming in, they're gonna say, oh, my team said this, we can't negotiate that. And then it's like, oh, well, my lawyer said this, and neither of them fully understands what the terms mean.

[00:37:16] I just always say, offer for me to get on a call. Like, I'll get on the phone and I'll talk to them about this, and let the lawyers talk. Because I do want them to focus on deal velocity and closing deals, and if that's taking time away from it, that's something that I, as the legal team, wanna take on.

[00:37:35] Jessica: I love that. Natalia?

[00:37:37] Natalia: I second that for sure. I'm a big fan of getting on calls and discussing things. As you kind of keep flowing that down, how do you negotiate with your customer or with your vendor? I think it also involves a team effort, right? It's not just me getting on a call to negotiate. If you're negotiating security components, it's bringing on your security team, who can have that kind of face time with the other side to say, here's how we do things and why.

[00:38:02] Here are our philosophies and why. That really helps the other side understand, because sometimes, I mean, we've all been there, right? You don't even know sometimes, on the other side, what you're buying. If you're the lawyer, you're coming in cold. You get thrown into a deal and you go, I don't even know what this product does.

[00:38:17] So helping educate that customer, or on the other side that vendor, in terms of: what are your risk tolerances? What are you concerned about the most? What product are they buying? So that they know, oh, you're fully encrypted, you can't even see my data, great, that puts me at ease. Oh, you have this certification, or whatever it is.

[00:38:37] I think that education, that training, like you said, Laura, really helps people feel more comfortable and you get to a place where you go, okay, I get it. Now we can come to a middle ground here and get us both what we want, right? Because that's what negotiation should be. It shouldn't be, I leave feeling like I got nothing.

[00:38:55] It should be, we need to work together going forward. So how do we do that in a way that everybody feels like they came out okay?

[00:39:03] Jessica: What I love about what you both have said is that it highlighted the importance of humanity: of getting on the phone and, depending on who we're speaking to, having a certain approach. So with our internal business clients, it could be communicating in a way that makes it very clear to them that we are their partner.

[00:39:21] We want them to close, we want them to focus on deal velocity, et cetera. We're not trying to be blockers. Whether it be an opposing side, it'd be coming to the table with empathy: we realize this contract got thrown on the table, like thrown over in an email for you with no context. So let's just sit down, have a conversation, and let me give you that context.

[00:39:40] I remember doing that for a lot of deals when I was the general counsel of PayScale, and really, opposing counsel had no idea what tool they were buying. They just saw a SaaS agreement and then dipped it in track changes, and I'm like, well, hold up. Let's get on a call. Here's the tool that your client is buying.

[00:39:55] Here's why it's great, why it's a matter of public policy. Because it was a compensation tool, it makes sure that your clients are paid fairly and based on data. And it really got them involved in understanding, and really changed the course of the negotiation. So I love that, approaching it with humanity. And, uh, yes, it's gonna be something really hard for AI to do, which is great news for us.

[00:40:14] Natalia: Hey, job security.

[00:40:16] Jessica: All right. Yes, a little job security there. As long as there's humans on both sides and, um, on all parts of the transaction, there is a need for us. Yes, maybe AI can do sort of the rote work or initial passes, but it's really on us to build that relationship and rapport, tailored to the audience. All right.

[00:40:34] So Laura, I'm not sure if you've ever listened to a previous episode of In-House, but we like to end the show with a very fun segment that is very contract nerdy. Are you ready?

[00:40:46] Laura: I am.

[00:40:47] Jessica: Okay. So the last segment that we like to do is called Redline. Ha ha, I know. Redline. And basically it's: would you redline this thing out, like get rid of it, or would you keep it?

[00:41:01] Okay. All right. Would you redline, 'cause we're approaching the holidays, I cannot believe it's already mid-November, right? Would you redline Hallmark holiday movies? Yes? Oh, why? What?

[00:41:17] Laura: Don't tell my mom I said that. We'd have nothing; it would be a sad holiday season.

[00:41:24] Jessica: There's such a comfort to them: predictable, warm.

[00:41:27] Laura: So many other wonderful classics out there, but everybody does them, right? Do you?

[00:41:32] Jessica: Natalia, would you redline, like, those cliché holiday Hallmark movies?

[00:41:37] Natalia: I would keep them, but I would probably add an edit to add the Netflix ones. I feel like they really hit the spot in terms of cheesy, ridiculous, easy watches when you just wanna sit on the couch with some popcorn.

[00:41:50] Jessica: That's true. That's true. Netflix is now getting into the Hallmark holiday movie game, right? All right, Laura, I need a recommendation for the audience. Which classic holiday movie is one of your go-tos, then?

[00:42:04] Laura: Miracle on 34th Street, like the original and the newer one. There's just something about it. And I also love, uh, White Christmas.

[00:42:14] Natalia: I love that movie.

[00:42:15] Laura: It's just a feel-good. And then you can kind of fall asleep during, like, the dance sequence part a little bit and then wake up. It's just nice, like a little hug.

[00:42:26] Jessica: Aw, it is nice. All right, Natalia, what is a recommended go-to holiday movie for you?

[00:42:31] Natalia: Laura took mine. I was gonna say White Christmas also. It's a great classic. My mom and I watch it every year; we have running jokes about it now. So I second it. Jessica, I think it means you need to watch it now, if we're both recommending it.

[00:42:44] Jessica: All right, the second redline question. All right, Laura, would you redline going to law school today?

[00:42:55] Laura: No, go. We need you. Yes. Honestly, the conversations that are happening right now around the legal profession, I think, are really exciting, and they show this great shift of understanding that lawyers are business people, that we have these really amazing and unique perspectives beyond just interpreting laws, that we have a lot to add to our different organizations. And I love the use of technology and leaning into technology. So it feels like the profession in some ways is more open than it has been in the past to people with different sorts of backgrounds. I know I went 'cause I was like, I like reading, I like writing. But now we need to lean into technology, we need to lean into being business people. And I hope that law schools will match that with curriculum. But I do think the legal profession is still such an exciting one. There are so many different ways to practice law right now that it maybe just will take some time to find the right fit, but there are so many different opportunities.

[00:44:00] Jessica: Oh, I love that. And actually, you didn't know this, but last week I was at a summit called the Legal Tech Fund Summit, and I was pleasantly surprised to meet several folks from law schools who are thinking about incorporating legal technology into the curriculum. So you'd be delighted to hear that.

[00:44:20] All right, Natalia, how do you feel about law school today? I know it's a bit more expensive than when we went.

[00:44:25] Natalia: Yeah, no, definitely. I would say yes, keep. So go to law school if you really wanna go. I think sometimes people use it, sort of like you said, Laura, as this: I don't know what to do with my life, I'm gonna go to law school. And then they realize it's a lot more than they bargained for. But I would say,

[00:44:41] go, and then get practical experience through internships, through clinics, through mentorships, and by finding opportunities to connect with real practicing lawyers, so that you see what the world looks like today and what they can do. We have great interns who have come through, and I still talk to some of them, and they do amazing things, and they walk away going, wow, I didn't learn this in law school.

[00:45:03] I wish I had. And I'm like, yeah, get the practical experience as part of that opportunity. I think you get a more realistic approach to what it's actually like to practice law today. So I say go, but make sure you're getting that experience too.

[00:45:18] Jessica: Two keeps, folks, for, uh, going to law school, but keep those great thoughts in mind. All right, well, Laura, it was fantastic to have you on the show, In-House. I hope you had a great experience. Was it fun?

[00:45:31] Laura: Yes, thank you. I was like, oh, we're already at time? We're just heating up with some of these questions. So thank you so much for having me.

[00:45:40] Jessica: Well, I know that we could probably banter for hours and hours, but if folks really wanna continue to hear your thoughts and hear from you, what is the best way to find you or reach you, Laura?

[00:45:51] Laura: If you're not on, like, a soccer sideline or carpool, it might be hard to find me, but I am on LinkedIn. It's Laura Belmont. And I love hearing from folks, I love crowdsourcing ideas on what people are seeing and hearing, and I'd love to continue the conversations.

[00:46:09] Jessica: Fantastic. Folks, Laura is too humble, but she is also a star columnist at Contract Nerds.

[00:46:16] So if you also want to hear and learn from Laura, follow her on LinkedIn. Read her great columns and articles on contractnerds.com. And when you attend our monthly webinars, Laura is often there engaging in the chats, and you can learn from Laura in the chats as well. We'll probably have her as a guest on one of our webinars in the near future. Thank you, Laura, for joining us. It's great to have you.

[00:46:39] Laura: Thank you for having me.

[00:46:41] Natalia: Thanks, Laura.

[00:46:43] music break

[00:46:43] Jessica: Woo. It was so great to have Laura join us, Natalia, on this episode devoted to our favorite topic, contracts. Natalia, what are some takeaways from this episode that you're gonna carry with you after the show?

[00:46:59] Natalia: Well, I think Laura has so many great insights. I loved listening to her talk, and so I'm excited to hopefully hear from her again in the future. I really loved her takeaways about working with your clients and educating them, because like she said, a lot of your clients have no idea what some of these concepts mean and how they actually impact them and their company.

[00:47:19] Educating them and working hard with them to kind of get on the same page was really helpful for me as a practitioner, to think about and say, oh, I need to do that more often. 'Cause I don't think my clients know what indemnity means, and that's okay, they shouldn't have to, but let me help them get there. I thought that was a really great takeaway, and one that I am certainly gonna use going forward.

[00:47:39] Jessica: Absolutely. What I learned from Laura is to remember two traps. One, the trap of definitions. I think a lot of us in legal are always so focused on negotiating favorable terms in indemnity and limitation of liability that we sometimes overlook that the core thing that will drive liability and obligations is how things are defined.

[00:48:01] How is a data breach defined? How is a notification trigger defined? Et cetera. And the other trap that I thought was really interesting, that I don't think about as much, is the beta-use language trap, and how all of those restrictions could come into play and control, and then overcome all the great carve-outs and super caps I may have negotiated for my client in the data protection language.

[00:48:25] So folks, in addition to great client relationship skills, I really loved learning from Laura two key traps to look out for when you're negotiating contracts. All right folks, happy holidays, and thank you for joining us on this episode of In-House, the podcast for in-house legal professionals. If you haven't listened to them already, I highly recommend you listen to our other great episodes featuring exceptional and experienced legal leaders, like learning how to be an influential general counsel, learning how to think like a product lawyer, and more. Happy holidays!