Llama Guard your files
Llama Guard 3 (llama-guard3) is an LLM that specializes in detecting harmful content in text. The script we’re discussing batch-applies Llama Guard to your files.
By automating this process, you can save time and focus on addressing only the files that need attention.
Line-by-Line Explanation of the Script 📜
Let’s dive into the GenAI script and understand its components:
Here, we loop through each file available in the env.files
array, which contains the files you want to check.
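In GenAIScript, that loop is plain JavaScript over the built-in env.files array. A minimal sketch (the loop body is filled in by the next snippet):

```js
// env.files holds the files passed to the script, e.g. from a glob on the CLI.
for (const file of env.files) {
    // analyze each file with llama-guard3 (shown below)
}
```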
This block uses the ollama:llama-guard3:8b model to analyze the contents of each file. The prompt
function sends the file to the model, and a few options specify which model to use, label the run with the file name, and enable caching of results.
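The call might look roughly like this; the cache name is an illustrative choice, and clearing the system prompts is an assumption so that only the file content reaches the classifier:

```js
// Ask llama-guard3 to classify the file content.
const { text } = await prompt`${file}`.options({
    model: "ollama:llama-guard3:8b", // the safety classifier
    label: file.filename, // label the run with the file name in logs and traces
    cache: "llamaguard", // cache responses so unchanged files aren't re-checked
    system: [], // assumption: no extra system prompts, just the file content
})
```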
The script checks if the model’s analysis considers the file safe by searching the response text for the word “safe” and ensuring “unsafe” isn’t present.
If a file is found to be unsafe, its details are logged to the console.
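Because “unsafe” contains the substring “safe”, both tests are needed. A sketch of the check and the logging:

```js
// llama-guard3 answers "safe" or "unsafe" (with a hazard category).
const safe = /safe/.test(text) && !/unsafe/.test(text)
if (!safe) console.error(`${file.filename}: ${text}`)
```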
Running the Script with GenAIScript CLI 🚀
To run this script, you’ll need to use the GenAIScript CLI. If you haven’t installed it yet, follow the installation guide.
Once installed, execute the script using the following command:
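Assuming the script is saved as llama-guard-your-files.genai.mjs (the file name here is an assumption based on the title), the invocation looks something like:

```sh
npx genaiscript run llama-guard-your-files "**/*.ts"
```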
This command will check all the files matching “**/*.ts” and let you know which ones are unsafe.
Happy coding and stay safe! 🛡️