Silicon Sonnets

Zerna.io GmbH

Explore the captivating world of artificial intelligence

Episodes
Investigating Cultural Alignment of Large Language Models (February 21, 2024)
Large Language Models as Minecraft Agents (February 15, 2024)
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast (February 15, 2024)
COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability (February 15, 2024)
PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models (February 15, 2024)
Prompt Design and Engineering: Introduction and Advanced Methods (February 12, 2024)
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks (February 12, 2024)
The World of Generative AI: Deepfakes and Large Language Models (February 12, 2024)
Universal Jailbreak Backdoors from Poisoned Human Feedback (February 12, 2024)
Pedagogical Alignment of Large Language Models (February 12, 2024)
When Large Language Models Meet Vector Databases: A Survey (February 08, 2024)
Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs (February 08, 2024)
Computer says 'no': Exploring systemic bias in ChatGPT using an audit approach (February 06, 2024)
Conversation Reconstruction Attack Against GPT Models (February 06, 2024)
GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models (February 06, 2024)
I Think, Therefore I am: Awareness in Large Language Models (February 01, 2024)
RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture (February 01, 2024)
Generative AI enhances individual creativity but reduces the collective diversity of novel content (February 01, 2024)
Weak-to-Strong Jailbreaking on Large Language Models (February 01, 2024)
Low-Resource Languages Jailbreak GPT-4 (January 30, 2024)
Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting (January 30, 2024)
An Empirical Study on Usage and Perceptions of LLMs in a Software Engineering Project (January 30, 2024)
Security Code Review by LLMs: A Deep Dive into Responses (January 30, 2024)
Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization (January 30, 2024)
Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs (January 29, 2024)