Video: Live Lab: Screens vs. Prompt Lab - Which one should you use (and when)? | Duration: 1956s | Summary: Live Lab: Screens vs. Prompt Lab - Which one should you use (and when)? | Chapters: Welcome and Introduction (34.27s), Screens vs. PromptLab (124.32s), Current Software Limitations (255.395s), PromptLab Pros and Cons (372.38s), Demo and Comparison (562.735s), Comparing Contract Features (1389.41s), Clause Deviation Identification (1533.75s), Legacy Contract Ingestion (1650.39s), Metadata Update Considerations (1812.675s), Conclusion and Farewell (1902.78s)
Transcript for "Live Lab: Screens vs. Prompt Lab - Which one should you use (and when)?": Hi, everyone. Welcome. Let's give people about thirty more seconds, and then we can go ahead and start. Alright. So hi, everyone. My name is Olga. I am here today to tell you about Screens and PromptLab and which tool is the best for the job. Before I go into the pros and cons of each tool, I want to take a moment to tell you about the things that they have in common. Both of them have LLMs behind them, large language models. And as the technology improves, the back end will improve as well. It doesn't mean that the front end will change, but, of course, with new features, that will happen also. This is really nice because you can have the same interface you're used to and yet get the improvements in technology as it develops. So the main difference between Screens and PromptLab is the interface: Screens has a prebuilt one and PromptLab does not. I'm going to talk about the pros and cons of each tool, and then I'll give you a short demo of the pros of each tool. With Screens, I think the best part of it is that it uses existing Agiloft configuration. This means that once you have a related table established, once it's on the layout, there's really nothing else you need to do just to see the data. This is very useful if you have multiple users that are interested in AI, want to create their own screens, want to run AI and ask different questions based on contract types or even their particular needs. Visual representation in Word documents is great. I think it's really helpful to be able to click on a piece of the interface and travel to that part of the text in the Word document. It is very helpful in terms of searching, especially if you're working with a large document. It will take you right there instead of having to scroll or use Ctrl+F, especially on topics that are not necessarily spelled the same way. AI will understand the semantics where your plain search will not.
Interactivity is great. If you're looking to redline, that is something that is extremely helpful and that PromptLab cannot do. And, of course, you do not have to be an admin to access it, to review it, and to update anything that may need an update as time goes on. This becomes really important when you have, again, multiple people, because regardless of your admin availability, when multiple people want to test what they've seen, it's great that they have access on their own and don't have to reach out to the admin every time. Now for the cons: we're looking at, obviously, current cons. The software is always improving, and we're getting better at everything we do. But at the moment, Screens does not pull data from Agiloft fields. I will show this in the PromptLab live demo to give you an understanding of when this is useful. But it is not great if we want to compare data in the record while we have an agreement open in front of us; Screens will not do that for us. Now, it's not something we necessarily want to do all the time. So if we're looking at negotiations, Screens still works great. But what I will show in the demo is a presignature check, where we want to make sure that the data in the record compares correctly to the data in the agreement. The other thing to remember is that you're, at the moment, working with a singular document. So if you have, let's say, an MSA and an SOW that accompanies it, you cannot check both in Screens. For nonstandard data, it can be really difficult. So if we have multiple pieces of information on the same data points spread across the agreement, it is easier to instruct PromptLab to grab them all. With Screens, it still can be possible if it's relatively consistent from data point to data point, but it does become more complicated to set up. Last but not least, at the moment, when you run a screen or screen action from Agiloft, your document has to be in the attachments table.
So if you have a custom setup where you have multiple tables with file fields and you want to run from different tables, at the moment, you won't be able to. It will only be able to do it from the attachments table. Now let me move on to PromptLab. I want to cover the same thing, the pros and cons of this tool, before we start the demo. I really like that we can use data from Agiloft for prompting, because we can grab field values from any record. We can also use embedded saved searches, which means that you can grab up-to-date search results every time you run a prompt. And you can use Agiloft formulas, which is not something that you do all the time, but it can be useful when you're setting up a complex prompt to make sure that you're grabbing current data. The other thing, of course, is that you can use multiple inputs. You can add more than one document, but you can also add a document plus fields from records. So when I'm demoing the presignature check, I will show you just that. And it works in any table. So if you have something that has actually nothing to do with a document at all, you have multiple fields in a table and you would like to ask questions, or you would like to infer something, you can do it without having to move it into attachments or contracts or any other table. The downside of PromptLab is the setup. Really, that's the most important complication. Again, if you have multiple people that are interested in AI and are trying to set up things, they will either all need licenses or you will have an admin that will have to continuously support them in the process. You also need an output field for your output text. If you are interested in parsing it into different fields, you will need update actions. And, of course, you need a trigger. Whether that's a button or a rule is up to you, but all of that needs to be set up in a certain way. You can't just create a Gen AI action. However, you can just come in and create a screen.
So there's a lot more that goes into PromptLab, and I recommend going with Screens unless you require the flexibility from the pros on the slide. I've covered the admin access; it gets tricky. It's an action like any action in Agiloft, so if you don't have admin access, you can't just go in and start creating. With Screens, as I said, you can just go in and start working right away. And, of course, there's no prebuilt interactive interface for results, which can be a good thing if you're looking for something specific. You can build your own within the record. There's a lot of flexibility there. You can create as many fields as you need. But at the same time, if you're looking for a quicker solution, a faster testing option, something that feeds into the existing interface (again, because we do care about the number of fields we have in large tables like contracts), PromptLab is more complicated to put together in a way that makes sense, and all of that falls onto the admin. So I'm going to move to the demo examples, but I wanted to say that if you guys have any questions on the pros and cons here or anything in the future, we have a Q&A tab in the webinar, so please feel free to ask your questions. And let me go ahead and show you Screens first. I've prerun everything, because I've had a situation where I needed to do a demo last year and then we had an AWS outage. So I wanted to make sure that I can show you everything I would like, and I'm going to start with Screens. So I already ran this. Again, it's very simple. All of my data comes into this table. No matter how many screens I run, I will see it here in my contract record. It doesn't require me to build anything. It's already available to me. Of course, I can adjust my view if I want something different here. I am totally capable of doing it, but I don't have to create new fields or new related tables. I will see it every time.
So, for what I would like to do here, I didn't create a complex screen, because most of you, our customers, I'm certain, have already used them. If you have not, please go check out our community screens. There is so much there that you can look at and see if it works for you. Here, I just wanted to show the difference in how data is used between the two different tools. So when I get all my data in here, as a human, I can see it right away. There is no automation that comes with just simple screen questions, but it is readable and available. When I am in Word, as mentioned, I can see my beautiful Screens interface. I can see the screens I've made before, so I can go ahead and click here to get my result. And I can see my very convenient ways to travel within the document. I can see what my start date is if I click on the element, and I can keep traveling, let's say, to the next area. Now, for the screen itself, if you have used the community screens but haven't built your own, I do want to show you how simple it is. Let me pull it up; just a moment. It really does not require anything more than a very simple question, if your question is that simple. Of course, not everything is going to be this way, but it allows you to bring a piece of data from the agreement into Agiloft and be able to view it without opening the Word document, without having to do anything else. These are just simple questions. They mimic the demo for the presignature check I have in PromptLab, so I don't want to spend too much time here. However, I want to show one very quick thing. I did ask for a contract amount on a contract that does not have a contract amount. In the question details, I can specify the type of output I would like to get if that is not available. And you can see that in the outputs here as well, which I think is very convenient. You don't have to worry about it displaying a hallucination.
It's much easier to ask AI to output something than nothing, and it also gives you an immediate answer: the contract amount is not in this agreement. Now, when it comes to PromptLab, I will spend a bit more time, because, of course, PromptLab requires more setup. I did create a presignature check in a table. Now, this is kind of cheating, because what I did is I created a singular text field, which I've set up as HTML so that I can use a table and colors and have ease of viewing. And what I did here is pull in record values, document values, and the status. This is very simple to set up and a great visual cue, and you do not have to know HTML for that. So let me show you the prompt here, and I can go over the prompt in a bit more detail, as it does require more. Unlike Screens, where you can easily have a single sentence, here you actually have to do your prompt engineering and due diligence in making sure that you're describing everything to the extent you're looking for. We typically pre-prompt; this helps a lot to set up a persona for the prompt, and then we're obviously asking AI to do something. I really wanted to show this because I want you to see that if you're looking for a visual element, which can be tricky to set up in Agiloft without creating multiple fields, it's actually quite easy. You do not have to know HTML. I've simply described what I'm looking for. Except for colors: because I'm picky about colors, I can use a color picker and put the number in, but if you just say blue, that will work as well. I told it what I would like to see in the status row. I told it to create a new row based on my headers and my key terms in the record. So I've told it everything that I would like to see, and then I've added my key terms from the record. As you can see, what I'm doing here is pulling the contract title from the record. What this allows me to do is to say, hey, my agreement title in the record is different from my document value.
Now, of course, for something like this, we probably don't care. It's a good example of something where you can say, well, it's a mismatch, but I'm perfectly happy with this, and I'm not concerned. However, when it comes to your counterparty, of course, you want to be certain that the company attached to this record matches the company in the agreement. So if we go to setup, I want to show here that each of the items I'm looking for, except the agreement title, which is very self-explanatory, has another set of instructions. This is similar to what I've done in the Screens situation, where I've asked it to output "not available," to give it a bit more direction. So here, I want the counterparty. In my example, I only have one internal legal party, so I'm just letting it know that it is not Pineapple Playgrounds; it is the other one. For contract type, I'm asking it to infer. I've anonymized this agreement with a fruit theme, so, of course, it does not have a master orchard agreement in the system, and I'm getting a mismatch as well. This could be a good conversation about whether that's okay or whether we need to add a new contract type to our system so that it reflects the contract types appropriately. Now, when it comes to evergreen, things become more complicated, and this is where our descriptions become more and more complex. The same is going to be true for Screens, so this particular example is applicable to both. In this particular agreement, I do not have an explicit term, which is a great example, because if it just says it's evergreen, it's very easy to infer. But here, I wanted to explain what I'm looking for, where to look, and how to interpret it. So I'm letting it know that I would like it to look at the term and termination clauses. I am providing a list of items from the choice list that we have in the system. And then within the instruction, I have another instruction. So I'm looking at auto-renewing, evergreen, and one-time.
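To make the structure of such a prompt concrete, here is a minimal sketch of how the pieces described above (a pre-prompt persona, a plain-language description of the HTML table, a "not available" fallback, and one instruction per key term pulled from the record) might be assembled. The function, field labels, and company names are all hypothetical illustrations; this is not Agiloft's actual prompt syntax or API.

```python
# Hypothetical sketch of assembling a presignature-check prompt.
# Nothing here is Agiloft syntax; it only mirrors the structure
# described in the demo.
def build_presignature_prompt(key_terms):
    """key_terms maps a field label to (record value, per-item instruction)."""
    parts = [
        # Pre-prompt: set a persona for the model.
        "You are a contracts analyst performing a presignature check.",
        # Describe the desired visual output in plain language (no HTML needed).
        "Return an HTML table with columns Key Term, Record Value, "
        "Document Value, and Status. Color the Status cell green on a "
        "match and red on a mismatch.",
        # Direct the model's behavior when data is missing.
        "If a value cannot be found in the document, output 'Not available'.",
    ]
    # One instruction block per key term pulled from the record.
    for label, (record_value, instruction) in key_terms.items():
        parts.append(f"Key term: {label}. Record value: {record_value}. {instruction}")
    return "\n".join(parts)

prompt = build_presignature_prompt({
    "Agreement Title": ("Master Orchard Agreement", ""),
    "Counterparty": ("Berry Farms LLC",
                     "Our internal party is Pineapple Playgrounds; "
                     "report the other party."),
})
```

The per-item instructions are where the real prompt engineering happens, as with the evergreen example above, where one key term carries a nested set of rules about which clauses to read and which choice-list values are allowed.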
Now let me jump back to the agreement really quick so you can take a look at the term here. It commences on the effective date and is in full force until terminated. Based on my descriptions, GenAI infers it as evergreen. Because I have Evergreen set up in my record here, and AI infers evergreen, I am seeing a match despite it not stating evergreen anywhere. There is no contract amount here, so, the same way, I've asked it to output "not available," and we have a mismatch. And then, last but not least, the part that we really care about is the small details. It's clear that there's a typo either in the document or in the contract start date. Checking presignature allows me to correct this, whether it's in one or the other, before I send it out. So this presignature check is where, in my opinion, PromptLab is a clear winner when it comes to understanding which tool you would prefer to use. As I pull my contract start date with the field label into the prompt, every time I run this, I pull from the record my button is in. So my data here always comes from the appropriate record. I get an opportunity to review two items at once without having to click and manually check. Screens did, if you remember, bring the same information in, but I would need further automation to compare this value to the value in the contract start date. In some scenarios, that is still a better solution, because if everything you set up is in Screens except this one thing, for maintenance purposes, it may be better to continue working in Screens and then elect to create extra configuration within Agiloft to keep all your AI together. But in others, if you're mostly using PromptLab, the same is true. There are a lot of things you can do in PromptLab that don't require a direct interface like the Word add-in that you can do with Screens.
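The status logic shown in this presignature demo can be sketched as a small function. This is an illustration of the rules the demo walks through (match, mismatch, and missing values), assuming a plain case-insensitive equality test; in practice the comparison is done by the LLM according to your prompt wording, not by code like this.

```python
# Minimal sketch of the presignature-check status rules from the demo.
# Assumption: values match when they are equal ignoring case/whitespace.
def compare_values(record_value, doc_value):
    """Return the status that would appear in the check's Status column."""
    normalize = lambda v: (v or "").strip().lower()
    record, doc = normalize(record_value), normalize(doc_value)
    if not record and not doc:
        # Missing on both sides (e.g. no contract end date anywhere): still okay.
        return "OK"
    if not record or not doc:
        # Present on one side only (e.g. contract amount): flag it for review.
        return "Mismatch"
    return "Match" if record == doc else "Mismatch"
```

For example, `compare_values("Evergreen", "evergreen")` returns `"Match"`, while a start date with a typo on either side returns `"Mismatch"` so a human can correct it before the agreement goes out.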
And if most of your prompts are there, it again makes sense to keep them together in order to have easier maintenance over time and know where everything is. And then the last one I set up is that if both data points are not available (in this scenario, there is no contract end date), we will see that it's still okay, even though we have nothing to actually compare. So, let me go back to the pros and cons really quick. I didn't show any embedded saved searches, because we only have a couple minutes really for this demo. But a great example for that would be if you have multiple internal legal entities and you don't want to just specify one name: you can use an embedded saved search, which means that it will pull data automatically every time the prompt is run. So if you add internal legal entities often, you don't want to spell them all out in your prompt. You want to create an embedded saved search that searches the company table for the internal legal entity role, and then you can put that embedded search into the prompt the same way you can any field in Agiloft. So that's pretty great. The other thing I'm doing here is pulling the document itself, and I am pulling data from the fields, which allows me to do that check. This works really well when you're doing a lot of inferring on data that is already in Agiloft and not within the agreement itself. But, of course, to do this, I had to create my text field. I had to create an action. I had to create a button. I had to put it on the layout. So unlike Screens, where I can just create a screen, run it, and see my outputs here, I am putting in more work in order to create a place where the prompt works really well. So this is pretty much where the demo ends, and let me see if we have questions. Okay. So: is either tool, or both, included in the new AI version? Screens is available to everyone, and so is PromptLab.
But it will depend, because you guys are existing customers; it will depend on where you're at in the AI activation project. Screens tables need to be deployed. And there is more to the infrastructure, especially if you're looking into obligations. PromptLab can work immediately, because you're building everything yourself. So depending on your particular situation, one may be faster than the other, but both will be available. Okay, let's see. You would like to build a feature that compares a contract to a previous contract from the same category. You would want to do that in PromptLab. Now, there is a way to do that with Screens as well. If you have your most important data points extracted, you have data from Screens available in Agiloft. With that, you could use PromptLab to compare one particular screen run to another particular screen run, or you could load two documents at the same time and ask your questions. There are a lot of ways to set this up depending on how the outputs from your previous AI runs are set up in Agiloft. It also really depends on the next step. So if we're only comparing, and we're outputting something similar to this, where we're saying, oh, there's a mismatch, it's simple enough to do. But if we want further information, where we want to understand our risk, if we want some sort of advice, then we definitely want to use PromptLab. It's much easier to let it give you freestanding answers, whereas Screens is designed to really work with the agreement. And if you're generating something that's not in the agreement, it's much easier to do that with PromptLab. It doesn't have that useful interface, which also gives it a bit more flexibility. I hope this answers your question. Let's see. Can either of them identify deviations in clauses? That's a good question.
If you have a clause library set up with your preferred positions in records in the clause library, it's fairly easy to use PromptLab to do that, because you can pull extracted clause language, which you could extract with PromptLab or Screens, into PromptLab along with the clause from your clause library. Remember, that's one of the pros of PromptLab: you can pull data from any record. As long as there's a link, which is your contract record, you can say, okay, these are my extracted clauses from this contract, this is my clause from the clause library, and you can ask PromptLab to compare them based on certain positions. So you do want to know what you're asking here as well. Let's say there's a particular thing that you never accept; it's fairly easy to say, hey, we never do exclusivity. You don't even need the other clause to compare. But if you're looking at a particular concept that is more complex than that, PromptLab works quite well, because, again, it allows you to pull two separate records into it and have a conversation about the comparison, whereas in Screens, you have an opportunity to talk about your preferred language, but it will let you redline rather than necessarily give you an easy way to review the output. It doesn't give you that suggestion in possibly a different field, whereas here you could create anything with any output you get from PromptLab. Okay. For legacy contract ingestion related to acquisition, what is the currently recommended method? This is a really good question, Jennifer, but it is a big question. So it really is dependent on your particular project: what is the best way to ingest contracts that you are receiving from acquisitions? I like the legacy bulk AI import, because it gives you a place between nothing and the contracts table. It allows you to have these documents in review before you publish them into the contracts table and make them final.
There's also nothing stopping you from adding PromptLab prompts to that table. Remember, with Screens, we're married to attachments, but we're not with PromptLab. So if you're seeing that the out-of-the-box labels are not sufficient, which will happen on rare contract types, you can add GenAI there as well to extract data for you at that stage. Now, alternatively, if you already have them created as contracts and attachments and you're looking for more, which typically means that you have the metadata, like the title and the counterparty and amounts, and you're looking for clauses, you could do either or. Screens can be run as a screen action in Agiloft, so you don't have to open the Word add-in. This is particularly relevant when you're talking about legacy, because you are going to be dealing with PDFs, so you can't really do that anyway. But you can create an action that will run screens, put it in a button or in a rule, same as you would do with PromptLab, and then see your outputs in a similar fashion to this. I will say that the longer the chunks of text you're looking to extract, the trickier it gets with Screens, because it is designed for precision rather than bulk. So if you're looking to extract a huge three-page clause, I would definitely start with PromptLab. The good news here is that because both of these are GenAI, even if you start in PromptLab or in Screens and then you see that that particular tool is not the best for the job, you can go ahead and move what you've done so far without losing everything, because the way you ask questions and the data points you're mentioning are all going to be the same. Let's see. Can you update metadata details automatically? That is a rule you would need to set up in Agiloft. Just because something comes in through AI doesn't mean that it's not text. That's all there is to it.
For Agiloft itself, it doesn't really matter whether you're setting up an AI extraction or typing into these fields. So imagine that, for some reason, you've put data in different fields, and now you're just running an update action to update the main fields. What I will say is that with anything to do with AI, having a human in the loop is important. So you may not want to update things automatically if you're not certain that the outputs from AI are true. A lot of times, it's better to set up an interim field like this one, where you can see that there actually is a mismatch and something should be updated, instead of running it automatically. Because if, let's say, your counterparty is spelled a bit differently in the agreement versus what you have in the company table, which happens a lot, that can cause trouble if you're updating automatically. It could create a duplicate company or relink it to one you actually didn't want to link to, based on human knowledge, not on something that AI knows. Okay. I hope this was helpful. I really wanted to point out the differences and what types of information are used in PromptLab versus in Screens, and I hope I was able to answer all your questions well. Some of them are quite deep. I will say that within an implementation, you will get much better answers, because we can see your KB, we can see what has been done and how to add AI in the most meaningful way without spending a tremendous amount of time on configuration, and we can have a conversation rather than a monologue from one side. Thank you, everyone, and I hope you guys have a nice rest of your week.