The high-level meetup will bring together forward-thinking brands, market leaders, AI evangelists, and hot start-ups to explore and debate advances in artificial intelligence and their impact on the enterprise and consumer sectors. Topics include deep learning, business intelligence, machine learning, AI algorithms and technologies, data and analytics, robotics, and virtual assistants and chatbots, as well as case-study presentations providing insight into the deployment of AI across different verticals.

AI gives computers and machines the ability to learn without being explicitly programmed. It has been described by many as the "new electricity" because of its potential to transform and permeate every major industry and sector. Next-generation digital public services, such as chatbots and smart forms, are powered by government data and artificial intelligence. Intelligent automation helps make public services more open, responsive, informative, and accessible to the people they serve.
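The idea of learning from data rather than from explicitly programmed rules can be illustrated with a minimal sketch. This is a hypothetical example using only the Python standard library: instead of hard-coding the relationship between inputs and outputs, the program estimates it from examples with an ordinary least-squares fit.

```python
# Minimal illustration of "learning without being explicitly programmed":
# the rule y = 2x + 1 is never written into the program; it is estimated
# from example (x, y) pairs instead.

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in points)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    a = cov_xy / var_x          # learned slope
    b = mean_y - a * mean_x     # learned intercept
    return a, b

# Training examples generated by a rule the program was never told.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(examples)
print(a, b)  # recovers slope 2.0 and intercept 1.0 from the data alone
```

The same pattern, scaled up to far richer models and far more data, is what the "new electricity" framing refers to.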
Artificial intelligence (AI) holds substantial promise for improving human life and economic competitiveness in a variety of ways and for helping solve some of society's most pressing challenges. At the same time, according to experts, AI poses new risks and could displace workers and widen socioeconomic inequality. To gain a better understanding of the emerging opportunities, challenges, and implications of developments in AI, the Forum on Artificial Intelligence was held as part of Knowledge Engineering 2018.
"Artificial Intelligence is a constantly developing technology that will likely touch every aspect of our lives," said Congresswoman Stefanik. "AI has already produced many things in use today, including web search, object recognition in photos and videos, prediction models, self-driving cars, and automated robotics. It is critical to our national security, and to the development of our broader economy, that the United States become the global leader in further developing this cutting-edge technology. The legislation I have introduced today will establish a commission to review advances in AI, identify our nation's AI needs, and make actionable recommendations on the direction we need to take. I look forward to advocating for this approach during the NDAA process this year."
Forum participants noted a range of opportunities and challenges related to artificial intelligence (AI), as well as areas needing future research and consideration by policymakers. Regarding opportunities, investment in automation through AI technologies could lead to improvements in productivity and economic outcomes similar to those experienced during previous periods of automation, according to a forum participant. In cybersecurity, automated AI systems and algorithms can help identify and patch vulnerabilities and defend against attacks. Automotive and technology firms use AI tools in the pursuit of automated cars, trucks, and aerial drones. In criminal justice, algorithms are automating portions of analytical work to provide input to human decision makers in the areas of predictive policing, face recognition, and risk assessment. Many financial services firms use AI tools in areas such as customer service operations, wealth management, consumer risk profiling, and internal controls.
At the forum, participants from industry, government, academia, and nonprofit organizations considered the potential implications of AI developments in four sectors--cybersecurity, automated vehicles, criminal justice, and financial services. Participants considered policy implications of broadening AI use in the economy and society, as well as associated opportunities, challenges, and areas in need of more research. Following the forum, participants were given the opportunity to review a summary of forum discussions and case studies.
Forum participants also highlighted a number of challenges related to AI. For example, if the data used by AI systems are biased or become corrupted by hackers, the results could be biased or cause harm. The collection and sharing of the data needed to train AI systems, limited access to computing resources, and shortages of skilled human capital are also challenges facing the development of AI. Furthermore, the widespread adoption of AI raises questions about the adequacy of current laws and regulations. Finally, participants noted the need to develop and adopt an appropriate ethical framework to govern the use of AI in research, and to explore the factors that determine how quickly society will accept AI systems in daily life.
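The point about biased data producing biased results can be made concrete with a small sketch. This is a hypothetical example in standard-library Python: a trivial "model" trained on historical decisions that were skewed against one group simply learns and reproduces that skew.

```python
from collections import Counter

# Hypothetical historical records (group, outcome), skewed against group "B":
# approvals were recorded far less often for B, regardless of merit.
training = ([("A", "approve")] * 80 + [("A", "deny")] * 20
            + [("B", "approve")] * 20 + [("B", "deny")] * 80)

def train(records):
    """Learn, for each group, the most frequent historical outcome."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(training)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the bias in the data is learned
```

Real AI systems are far more complex than this frequency counter, but the failure mode is the same: a model fit to skewed or corrupted data faithfully reproduces the skew.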
After considering the benefits and challenges of AI, forum participants highlighted several policy issues they believe require further attention. In particular, they emphasized the need for policymakers to explore ways to (1) incentivize data sharing, such as by providing mechanisms for sharing sensitive information while protecting the public and manufacturers; (2) improve safety and security (e.g., by creating a framework that ensures the costs and liabilities of providing safety and security are appropriately shared between manufacturers and users); (3) update the regulatory approaches that will affect AI (e.g., by leveraging technology to improve regulation and reduce its burden, while assessing whether desired outcomes are being achieved); and (4) assess acceptable levels of risk and ethical considerations (e.g., by providing mechanisms for weighing tradeoffs and benchmarking the performance of AI systems). As policymakers explore these and other implications, they will confront fundamental tradeoffs, according to forum participants.

Participants also highlighted several areas related to AI that they believe warrant further research, including (1) establishing regulatory sandboxes (i.e., experimental safe havens where AI products can be tested); (2) developing high-quality labeled data (i.e., data organized, or labeled, so that AI systems can use them to produce more accurate outcomes); (3) understanding the implications of AI for training and education for the jobs of the future; and (4) exploring computational ethics and explainable AI, whereby systems can reason without being told explicitly what to do, explain why they did something, and make adjustments for the future.
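The notion of high-quality labeled data can be illustrated with a minimal hypothetical sketch: the same raw records become usable for supervised learning only once each is paired with an outcome label. The record names and labels below are invented for illustration.

```python
# Hypothetical illustration of "labeled data": raw records become training
# data for a supervised AI system once each is paired with an outcome label.

raw_records = ["transaction_0001", "transaction_0002", "transaction_0003"]

# A human reviewer (or a curation pipeline) attaches a label to each record;
# the quality and consistency of these labels bound the model's accuracy.
labels = ["legitimate", "fraudulent", "legitimate"]

labeled_data = list(zip(raw_records, labels))
print(labeled_data[1])  # ('transaction_0002', 'fraudulent')
```

In practice, producing such labels at scale, consistently and without error, is one of the costliest steps in building an AI system, which is why participants flagged it as a research area.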