What is the output of the following code snippet? int n1[] =…


Questions

What is the output of the following code snippet?

int n1[] = { 1, 2, 3, 4, 5 };
memmove(&n1[1], &n1[3], 8);
for (size_t i = 0; i < 5; i++) {
    printf("%d\t", n1[i]);
}
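For reference, here is a minimal self-contained version of the snippet, a sketch assuming a typical platform where int is 4 bytes, so the 8-byte memmove covers exactly two elements:

#include <stdio.h>
#include <string.h>

int main(void) {
    int n1[] = { 1, 2, 3, 4, 5 };

    /* Copy 8 bytes starting at n1[3] (the values 4 and 5, assuming
       4-byte ints) into n1[1]; memmove is safe even when the source
       and destination regions overlap. */
    memmove(&n1[1], &n1[3], 8);

    /* Prints the elements separated by tabs; on a 4-byte-int
       platform the output is: 1  4  5  4  5 */
    for (size_t i = 0; i < 5; i++) {
        printf("%d\t", n1[i]);
    }
    printf("\n");
    return 0;
}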

Google AI chatbot responds threateningly: "Human … Please die."[1]

A college student in Michigan received a threatening response during a chat with Google's AI chatbot Gemini. In a back-and-forth conversation about the challenges and solutions for aging adults, Google's Gemini responded with this threatening message: "This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please."

Vidhay Reddy, who received the message, told CBS News the experience deeply shook him. "This seemed very direct. So it scared me for more than a day, I would say." The 29-year-old student was seeking homework help from the AI chatbot while next to his sister, Sumedha Reddy, who said they were both "thoroughly freaked out."

"I wanted to throw all of my devices out the window. I hadn't felt panic like that in a long time, to be honest," she said. "Something slipped through the cracks. There are a lot of theories from people with thorough understandings of how genAI [generative artificial intelligence] works, saying, 'This kind of thing happens all the time.' Still, I have never seen or heard of anything quite this malicious and seemingly directed to the reader, which luckily was my brother who had my support at that moment," she added.

Her brother believes tech companies need to be held accountable for such incidents. "I think there's the question of liability of harm. If an individual were to threaten another, there may be repercussions or discourse on the topic," he said.

Google states that Gemini has safety filters that prevent chatbots from engaging in disrespectful, sexual, violent, or dangerous discussions and from encouraging harmful acts. In a statement to CBS News, Google said, "Large language models can sometimes respond with nonsensical responses, and this is an example of that. This response violated our policies, and we've taken action to prevent similar outputs from occurring."

While Google referred to the message as "nonsensical," the siblings said it was more serious than that, describing it as a message with potentially fatal consequences: "If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could put them over the edge," Reddy told CBS News.

It's not the first time Google's chatbots have been called out for giving potentially harmful responses to user queries. In July, reporters found that Google AI gave incorrect, possibly lethal, information about various health queries, such as recommending that people eat "at least one small rock per day" for vitamins and minerals. Google said it has since limited the inclusion of satirical and humor sites in its health overviews and removed some of the search results that went viral.

However, Gemini is not the only chatbot that has returned concerning outputs. The mother of a 14-year-old Florida teen who died by suicide in February filed a lawsuit against another AI company, Character.AI, as well as Google, claiming the chatbot encouraged her son to take his life. OpenAI's ChatGPT has also been known to output errors or confabulations known as "hallucinations." Experts have highlighted the potential harms of errors in AI systems, from spreading misinformation and propaganda to rewriting history.
Some users on Reddit and other discussion forums claim the response from Gemini may have been engineered through user manipulation, either by triggering a specific response, by prompt injection, or by altering the output. However, Reddy says he did nothing to incite the chatbot's response. Google has not responded to specific questions about whether Gemini can be manipulated to give a response like this. Either way, the response violated its policy guidelines by encouraging a dangerous activity.

[1] From https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/

How does a security threat analysis framework (like PASTA) fail to identify threats (like the ones presented in the previous article) in generative AI systems (like Google Gemini, ChatGPT, etc.)? Cite at least two gaps in the security threat analysis framework. Show the impact of the failure on the user's security for each gap. Use examples to support/justify your answer.

Rubric

Identification of Two Gaps in the Security Threat Analysis Framework (20 points total; 10 points per gap)
- 8 points: Identify a specific gap in the security threat analysis framework (e.g., PASTA).
- 2 points: Explain why the identified gap is relevant to generative AI or LLM systems.

Explanation of Impact on User Security (10 points total; 5 points per gap)
- 3 points: Explain the specific impact of the framework's failure on user security.
- 2 points: Tie the impact back to the identified gap logically and coherently.

Use of Examples to Support/Justify Gaps and Impacts (10 points total; 5 points per example)
- 3 points: Provide relevant and realistic examples of how the identified gaps manifest in LLM systems.
- 2 points: Connect the examples to the user security impacts described.

Throwback Attack: Chinese hackers steal plans for the F-35 fighter in a supply chain heist

As cyberattacks on national critical infrastructure and private industry increase, the U.S. Department of Defense (DoD) introduced the Cybersecurity Maturity Model Certification (CMMC) to standardize cybersecurity practices for defense contractors. This process is critical, as demonstrated by China's 2007 theft of sensitive F-35 Lightning II documents, which was confirmed by Edward Snowden's 2015 leak. Snowden's documents revealed that a data breach at a Lockheed Martin subcontractor allowed China to access F-35 designs, contributing to the development of their J-31 stealth fighter.

Supply chain attacks like this are becoming more frequent and damaging, as seen in high-profile cases such as the SolarWinds and Kaseya attacks. According to Ryan Heidorn, co-founder of Steel Root, adversaries are stealing intellectual property at an alarming rate, targeting both large prime contractors like Lockheed Martin and smaller suppliers that may lack sophisticated cybersecurity. The CMMC aims to curb this issue by ensuring DoD contractors implement strict cybersecurity practices. While many companies already face these requirements, CMMC enforces compliance through assessments and certification, making it a critical mechanism to prevent the loss of sensitive information. The goal is to protect valuable defense technology, like the F-35, from further theft as adversaries like China continue to target critical U.S. systems.

In the context of the 2007 theft of sensitive F-35 Lightning II technical documents and other similar supply chain attacks, how could the PASTA (Process for Attack Simulation and Threat Analysis) methodology enhance defense contractors' and DoD vendors' overall security process to prevent future data breaches?

Stadiums Are Embracing Face Recognition. Privacy Advocates Say They Should Stick to Sports[1]

Facial recognition technology is being increasingly adopted by major sports leagues like the MLB and NFL to streamline fan entry and enhance security. However, this trend has sparked concerns among privacy advocates who argue that the technology poses significant risks to individual privacy.

Supporters of facial recognition argue that it offers several benefits, such as reducing wait times at stadium entrances and improving security measures. Facial recognition allows fans to opt for express entry lanes, often bypassing longer queues. Additionally, the technology can aid in identifying potential security threats and facilitating faster entry for authorized personnel.

On the other hand, critics raise concerns about law enforcement agencies' potential misuse of facial recognition data. They argue that the technology could track individuals' movements, monitor their activities, and even identify protesters or dissidents. Furthermore, there are concerns about the accuracy of facial recognition systems, which can lead to false positives and wrongful identifications.

While some teams and leagues have implemented strict privacy measures and obtained explicit consent from fans, others have been criticized for their lack of transparency and potential overreach. Facial recognition in sports raises broader questions about the balance between security, convenience, and individual privacy rights. As this technology continues to evolve, it is crucial to have open discussions and establish robust regulations to safeguard against potential abuses.

[1] Based on the WIRED article: https://www.wired.com/story/face-recognition-stadiums-protest/

Given the discussion in the previous article, some groups argue that surveillance provides security to the audience and that the loss of privacy is acceptable because the benefits outweigh the harms. Why is this position not supported by modern privacy practices? Which principle of Privacy by Design is violated? Justify your answer. Give one suggestion on how to support increased surveillance without sacrificing privacy.

Rubric

Explanation of Why the Discourse Is Not Supported by Modern Privacy Practices (10 points)
- 9-10 points: Thorough explanation that clearly identifies key privacy principles (e.g., transparency, proportionality, consent) and describes why the trade-off between security and privacy is flawed. Incorporates specific examples or evidence from the article.
- 6-8 points: Clear explanation of why the discourse is problematic, but lacks depth or examples from the article; covers general privacy concerns without linking them explicitly to the argument.
- 3-5 points: Partial explanation with vague reasoning; limited or no connection to modern privacy principles or the article.
- 0-2 points: No valid explanation provided, or severely off-topic response.

Identification and Explanation of the Violated Privacy by Design Principle (10 points)
- 9-10 points: Correctly identifies the most relevant design principle (e.g., Visibility and Transparency, Respect for User Privacy, or Positive-Sum Functionality) and provides a detailed, accurate explanation of why it is violated. Links the violation to the facial recognition system and its potential for misuse.
- 6-8 points: Identifies a relevant design principle and provides a reasonable explanation, but lacks specificity or a clear connection to the article.
- 3-5 points: Identifies a design principle but provides little or no explanation, or the explanation is incomplete or inaccurate.
- 0-2 points: Fails to identify a relevant design principle, or provides an incorrect or irrelevant explanation.

Suggestion for Supporting Surveillance Without Losing Privacy (10 points)
- 9-10 points: Provides a creative and feasible suggestion that aligns with privacy principles (e.g., anonymized biometrics, decentralized processing), clearly explains how it balances surveillance with privacy, and links it to the scenario discussed in the article.
- 6-8 points: The suggestion is reasonable and somewhat aligned with privacy principles, but lacks depth or a clear connection to the problem.
- 3-5 points: The suggestion is vague, impractical, or only partially addresses privacy concerns.
- 0-2 points: No suggestion provided, or the suggestion does not address privacy concerns.
