
Silicon Sonnets

Zerna.io GmbH

Explore the captivating world of artificial intelligence.

Episodes
What's in a Name? Auditing Large Language Models for Race and Gender Bias (March 04, 2024)
Gender Bias in Large Language Models across Multiple Languages (March 04, 2024)
DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers (March 04, 2024)
Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions (March 04, 2024)
OpenMedLM: Prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models (March 04, 2024)
Investigating Cultural Alignment of Large Language Models (February 21, 2024)
Large Language Models as Minecraft Agents (February 15, 2024)
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast (February 15, 2024)
COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability (February 15, 2024)
PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models (February 15, 2024)
Prompt Design and Engineering: Introduction and Advanced Methods (February 12, 2024)
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks (February 12, 2024)
The World of Generative AI: Deepfakes and Large Language Models (February 12, 2024)
Universal Jailbreak Backdoors from Poisoned Human Feedback (February 12, 2024)
Pedagogical Alignment of Large Language Models (February 12, 2024)
When Large Language Models Meet Vector Databases: A Survey (February 08, 2024)
Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs (February 08, 2024)
Computer says 'no': Exploring systemic bias in ChatGPT using an audit approach (February 06, 2024)
Conversation Reconstruction Attack Against GPT Models (February 06, 2024)
GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models (February 06, 2024)
I Think, Therefore I am: Awareness in Large Language Models (February 01, 2024)
RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture (February 01, 2024)
Generative AI enhances individual creativity but reduces the collective diversity of novel content (February 01, 2024)
Weak-to-Strong Jailbreaking on Large Language Models (February 01, 2024)
Low-Resource Languages Jailbreak GPT-4 (January 30, 2024)