Beyond Big Brother: AI’s Role in Safety, Identity, and Innovation Across East Asia

Published 12/03/2024 in Scholar Travel Stipend
Written by Hannah Kim

I traveled to Hong Kong and South Korea, two of Asia’s most technologically advanced regions, to broaden my understanding of AI governance and technology ethics. The experience showed me how these societies integrate AI into daily life, balance innovation with ethical considerations, and address the complicated interplay between privacy, security, and cultural values.

Overall, this trip expanded my perspective on global AI governance, challenging my assumptions rooted in Western-centric frameworks and providing a nuanced understanding of how technology operates within distinct cultural, political, and economic contexts. The ubiquity of AI in surveillance, entertainment, advertising, and even personal identity across these regions revealed both the potential and perils of advanced technology in shaping human lives.

One of the most striking observations during my time in Hong Kong and South Korea was the prevalence of surveillance technologies. Closed-circuit television (CCTV) cameras are not just tools for monitoring public spaces but are deeply integrated into urban life. In South Korea, for instance, smart city initiatives have woven AI-powered cameras into urban infrastructure, capable of recognizing faces, tracking movement, and predicting behaviors (Figure 1). These systems are touted as vital for ensuring public safety, particularly in a densely populated and highly urbanized society. My Korean family members view this level of surveillance as a necessary compromise for communal safety, reflecting Confucian values that prioritize harmony and collective well-being. This perspective contrasts sharply with Western privacy-centric norms, where such pervasive monitoring might evoke fears of government overreach or "Big Brother" scenarios.

In Hong Kong, the ethical challenges of surveillance are deeply intertwined with the city’s complex political and social context, where technology often serves as both a tool of progress and a potential instrument of control. AI-driven surveillance technologies, such as facial recognition cameras and behavior monitoring systems, have been increasingly deployed in public spaces under the justification of maintaining order and enhancing safety (Figure 2). However, their use has sparked widespread concern, particularly in the context of protests and political dissent. When I spoke with a fellow Stanford peer and Hong Kong resident, he elaborated on how these technologies have created a pervasive sense of unease. He highlighted that while many citizens recognize the potential benefits of such systems—such as their ability to streamline urban management or improve public safety—they also grapple with the reality that these tools could be weaponized to suppress dissent.

This duality reflects an interesting tension: the desire for technological progress versus the risks of entrenching power imbalances. My peer pointed out how this tension plays out in everyday life, where the presence of surveillance systems fosters a "chilling effect," discouraging open expression in public spaces. He emphasized that this is not simply a technical issue but a deeply human one, requiring a governance framework that respects individual liberties while navigating the demands of societal security. These conversations underscored the need for ethical governance frameworks that are not only technically sophisticated—ensuring systems are secure, transparent, and accountable—but also culturally and politically attuned to the environments in which they are implemented. In Hong Kong, where historical sensitivities around freedom and control remain vivid, these frameworks must acknowledge and address the social trust deficit exacerbated by surveillance technologies, fostering a balance between innovation and the protection of fundamental human rights.

Beyond surveillance, AI is reshaping the cultural fabric of these societies, particularly in the realm of entertainment and media. In South Korea, AI-generated deepfakes have become a ubiquitous part of advertising and entertainment. Some of the nation’s beloved idols—figures central to its globally renowned K-pop industry—are now entirely synthetic. Virtual influencers and AI-generated celebrities dominate advertisements, music videos, and even social media, blurring the lines between reality and fiction.

This phenomenon raises profound ethical questions about authenticity, identity, and labor. Conversations I had with my friends in Seoul revealed a mix of fascination and unease. On one hand, virtual idols eliminate the human toll of the entertainment industry, such as the grueling schedules and invasive scrutiny faced by real performers. On the other hand, they challenge traditional notions of artistry, creativity, and human connection. What does it mean for fans to form emotional bonds with entities that have no lived experiences? And how might the proliferation of synthetic personalities affect societal values around individuality and authenticity?

In Hong Kong, AI is similarly leveraged in commercial and creative industries. Deepfake technology is widely used in advertisements, allowing brands to create hyper-targeted content. While these applications showcase the power of AI to enhance creative expression, they also introduce risks of manipulation and misinformation. The ethical implications of deepfake technology were a recurring topic in my discussions with peers, particularly around its potential for misuse in political and social contexts. This duality—AI as both a tool for innovation and a source of ethical complexity—was a recurring theme throughout my trip.

One of the key lessons I learned is the extent to which cultural values shape the ethical frameworks surrounding AI. In South Korea, a high level of trust in government institutions and a collective mindset contribute to a more accepting attitude toward AI’s integration into public life. This contrasts with the more individualistic ethos of Western societies, where skepticism toward authority often fuels resistance to surveillance and automation. In Hong Kong, the political landscape has a significant impact on how AI is perceived and utilized. The city’s history as a global financial hub fosters a pragmatic approach to technology, emphasizing its economic and functional benefits. Yet, its political realities introduce a layer of complexity, as AI is increasingly seen as both an enabler of progress and a potential tool for maintaining order.

Finally, perhaps the most striking realization during my trip was the sheer ubiquity and normalization of AI in everyday life. This mindset shift—AI as ordinary, even inevitable—was particularly evident among the younger generations I interacted with in both Hong Kong and South Korea. AI is a deeply pragmatic tool embedded in their daily routines, from AI-powered translation apps to recommendation algorithms for entertainment and shopping. This attitude is beginning to take root in the United States as well, where AI is increasingly seen as a utility rather than a novelty. However, the cultural and regulatory contexts differ significantly. In the U.S., debates about AI often center on individual rights, corporate responsibility, and market-driven innovation. By contrast, my experiences in East Asia highlighted a collective approach, where societal benefit and state-led initiatives play a more prominent role. The normalization of AI in these societies also serves as a reminder that ethical frameworks must evolve alongside technological adoption. As AI becomes increasingly ubiquitous, the challenge is not only to address its immediate impacts but also to anticipate its long-term implications for privacy, identity, and societal cohesion.

The Milken Institute and the Milken Family Foundation are united by their commitment to creating meaningful, measurable progress in addressing critical societal challenges. The Milken Institute focuses on accelerating solutions to pressing global issues, fostering innovation, and promoting inclusive growth, while the Milken Family Foundation emphasizes empowering individuals through education and advancing opportunities for personal and societal enrichment. My research into AI governance and tech ethics directly aligns with these missions by addressing the ethical, cultural, and societal complexities of rapidly advancing technology—an area with great implications for global well-being.

By examining how societies in Hong Kong and South Korea integrate AI into daily life while navigating ethical trade-offs, my research contributes to building inclusive frameworks for responsible AI development. This aligns with the Milken Institute’s goal of fostering innovation that benefits society as a whole. Furthermore, my exploration of how these regions prepare the next generation to grapple with AI’s ethical challenges resonates with the Milken Family Foundation’s dedication to education. I hope one day to promote ethical preparedness in technology, empowering individuals worldwide to lead productive and meaningful lives in an AI-driven future.