Kathy Baxter, principal architect of Salesforce's Ethical AI Practice, says AI developers must act quickly to develop and deploy systems that address algorithmic bias. In an interview with ZDNET, Baxter highlighted the need for diverse representation in datasets and user research to ensure fair and unbiased AI systems. She also stressed the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy, and called for cross-industry collaboration, such as the model used by the National Institute of Standards and Technology (NIST), to develop robust and secure AI systems that benefit everyone.

One of the key questions in AI ethics is how AI systems can be developed and deployed without reinforcing existing social biases or creating new ones. To that end, Baxter stressed the importance of asking who benefits from AI technology and who pays the cost. It is essential to examine the datasets used and make sure they represent everyone's voice, and to identify potential harms through inclusive design and user research during development.

"This is one of the fundamental questions we need to discuss," Baxter said. "Women of color in particular have been asking this question and doing research in this area for years. I'm excited to see so many people talking about it, especially with regard to generative AI. But what we need to do, fundamentally, is ask who benefits from this technology and who pays for it. Are their voices included?"

Social bias can be instilled into AI systems through the datasets used to train them. Unrepresentative datasets, such as those that underrepresent certain racial or cultural groups, can result in biased AI systems. Likewise, applying AI systems unevenly across society can perpetuate existing stereotypes.
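The dataset-representation concern described above can be made concrete with a minimal audit script. This is an illustrative sketch only; the function name, the `group` field, and the 10% threshold are assumptions for the example, not anything Baxter or Salesforce prescribes.

```python
from collections import Counter


def representation_report(records, group_key, min_share=0.10):
    """Report each group's share of a dataset and flag groups whose
    share falls below a chosen threshold. Purely illustrative."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": n / total < min_share}
        for group, n in counts.items()
    }


# A toy dataset where group C makes up only 5% of the records.
sample = (
    [{"group": "A"}] * 70
    + [{"group": "B"}] * 25
    + [{"group": "C"}] * 5
)
report = representation_report(sample, "group")
```

A check like this only surfaces raw imbalance; deciding what counts as adequate representation for a given application still requires the kind of user research the article describes.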
To make AI systems transparent and understandable to the average person, it is crucial to prioritize explainability in the development process. Techniques such as chain-of-thought prompting can help AI systems show their work and make decision-making processes easier to follow. User research is also vital to ensure that explanations are clear and that users can identify uncertainty in AI-generated content.

Protecting individuals' privacy and ensuring responsible AI use requires transparency and consent. Salesforce follows guidelines for responsible generative AI that include respecting the data's provenance and using customer data only with consent. Letting users enable, disable, or otherwise control how their data is used is critical to privacy.

"We only use customer data when we have their consent," Baxter said. "It's really important to be transparent when you're using someone's data, to let them opt in, and to let them come back and say when they no longer want their data included."

As the race to innovate in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about the use of AI-generated content and keeping a human in the loop can help maintain that control.

Ensuring that AI systems are safe, reliable, and usable is critical, and industry-wide collaboration is vital to achieving it. Baxter praised the AI Risk Management Framework created by NIST with input from more than 240 experts across industries. This collaborative approach provides a common language and framework for identifying risks and sharing solutions. Failure to address these ethical AI issues can have serious consequences, as seen in wrongful arrests caused by facial recognition errors or the creation of harmful content.
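The chain-of-thought prompting technique mentioned above amounts to instructing a model to lay out intermediate reasoning before its final answer. A minimal sketch of how such a prompt might be constructed follows; the wrapper function and instruction wording are assumptions for illustration, not a Salesforce or vendor API.

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in an instruction asking the model to show its
    reasoning step by step before giving a final answer (illustrative)."""
    return (
        "Answer the question below. First reason step by step, then state "
        "the final answer on its own line, prefixed with 'Answer:'.\n\n"
        "Question: " + question
    )


prompt = chain_of_thought_prompt(
    "If a train travels 60 km in 40 minutes, what is its average speed in km/h?"
)
```

The resulting string would be sent to whatever model API is in use; the point, as the article notes, is that the visible intermediate steps make the model's decision process easier for users to inspect.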
Investing in countermeasures and focusing on present-day harms, rather than on speculative future damage, can help mitigate these issues and ensure the responsible development and use of AI systems. While the long-term future of AI is an intriguing topic, Baxter emphasizes the importance of focusing on the present: enabling responsible AI use today and addressing social biases now will better prepare society for future AI developments. By investing in ethical AI practices and collaborating across industries, we can help create a safer and more inclusive future for AI technology.

"I think the timeline is very important," Baxter said. "We have to really invest in the here and now and create that muscle memory, create these resources, create regulations that allow us to keep moving forward, but do so safely."

Gotopnews.com