#HaveAIChatbotsEarnedOurTrust? #TrustInAI

Have AI Chatbots Earned Our Trust? – Drexel News Blog

Millions of people are using AI-driven large language model (LLM) programs for tasks such as writing, fact-checking, and cybersecurity. A recent survey by security and privacy researchers from Drexel University’s College of Computing & Informatics examined how trustworthy these programs really are. While LLMs have enhanced code and data security, they also pose privacy and security risks that have not been thoroughly scrutinized.

Users may not fully understand how their data is used to train LLMs, which can lead to privacy violations. Third-party access and data aggregation pose additional risks, and user data can be valuable to attackers. Attackers can also manipulate LLM outputs through data poisoning, though strict controls and security measures can mitigate this risk. Techniques for ensuring the quality of training data include diverse sourcing, data preprocessing, accurate annotation, and automated quality checks.

Privacy concerns extend to jailbreaking, in which crafted prompts bypass an LLM’s ethical guidelines to generate harmful content. To protect users and their data, traditional security measures such as authentication and encryption should be applied. Guidelines and oversight can limit malicious uses of LLMs, such as creating malware or misinformation, through risk assessments, access control, content filtering, and transparency; technical safeguards and compliance audits further strengthen security. Drexel University provides guidance and policies for employees on the use of AI technology.
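To make the idea of "automated quality checks" on training data more concrete, here is a minimal sketch of the kinds of filters the summary alludes to: dropping degenerate samples, deduplicating, and screening for obvious sensitive-data markers. Every threshold, function name, and blocklist term below is an illustrative assumption, not a description of any actual Drexel or vendor pipeline.

```python
# Illustrative sketch: simple automated quality checks for LLM training
# samples. The rules (minimum length, hash-based dedup, keyword blocklist)
# are assumptions chosen for clarity, not a production PII/content filter.

def passes_quality_checks(text, seen_hashes, blocklist=("password", "ssn")):
    """Return True if a training sample passes the simple checks below."""
    # 1. Drop empty or very short samples.
    if len(text.strip()) < 20:
        return False
    # 2. Deduplicate via a hash of the whitespace-normalized, lowercased text.
    normalized = " ".join(text.lower().split())
    h = hash(normalized)
    if h in seen_hashes:
        return False
    seen_hashes.add(h)
    # 3. Screen for crude sensitive-data markers (a stand-in for real
    #    PII detection and content filtering).
    if any(term in normalized for term in blocklist):
        return False
    return True

seen = set()
samples = [
    "Short.",
    "This is a clean, sufficiently long training sample about security.",
    "This is a clean, sufficiently long training sample about security.",
    "My password is hunter2 and should never reach a training set.",
]
kept = [s for s in samples if passes_quality_checks(s, seen)]
# Only the first copy of the clean long sample survives the checks.
```

Real pipelines layer far more on top of this (language identification, toxicity classifiers, near-duplicate detection), but the structure is the same: a chain of cheap, automated gates applied before data ever reaches training.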

Source link: https://newsblog.drexel.edu/2024/06/25/qa-have-ai-chatbots-earned-our-trust/
