# James B. Pollack

> Art, Innovation, and Complexity.

MFA in Digital Arts and New Media from UC Santa Cruz. First-generation college graduate of Yale University (English). Contributor to OpenAI's GPT-2. Patent holder (US20160173960A1).

James B. Pollack bridges breakthrough technologies with human experiences, translating cutting-edge AI/ML capabilities into transformative interactive installations, spatial computing interfaces, and generative creative tools. From Meta's moonshot prototypes to open-source contributions to GPT-2, he helps teams discover emergent possibilities in complex systems and build the tools that make them real. He has worked with computer animation pioneers like [Larry Cuba](https://en.wikipedia.org/wiki/Larry_Cuba) to help them create art they thought was impossible.

## About

First-generation college graduate passionate about translating cutting-edge technology into transformative human experiences. Obsessed with complexity science -- not for a degree, just because he needed to understand how things emerge. This led from studying AI and art at UC Santa Cruz's Expressive Intelligence Studio, to contributing code to OpenAI's GPT-2, to prototyping the future of spatial computing at Meta, to attending the [Santa Fe Institute](https://www.santafe.edu)'s Symposium on Collective Intelligence.

His work lives at the intersection where breakthrough technologies meet human creativity. He builds tools that let artists harness AI without coding, turns facial motion capture into browser-based magic, and helps computer animation pioneers like [Larry Cuba](https://en.wikipedia.org/wiki/Larry_Cuba) create art they thought was impossible. He looks at emerging technologies and asks: "What does this want to become? What emergent possibilities are hiding here that nobody's seeing yet?"

## Portfolio

### Meta - Generative AI & AR (2022-2025)

Senior Technical Artist roles spanning generative AI, augmented reality, avatars, and artists' tools.
- Generative AI creation tools and workflows for rapid prototyping
- Alpha tester for Meta Ray-Ban Display Glasses
- Prototyped augmented reality experiences and avatar systems
- Built tools for artists and creators working with AI

Technologies: Generative AI, Augmented Reality, Avatar Systems, Creator Tools, Prototyping

### Amazon Originals - Interactive TV (2020-2021)

Software Engineer at Amazon Original Television, through Left Field Labs. Built web-based interactive experiences for original television shows.

- Tens of millions of users: pages published as flagship content in the catalog
- Global reach: released across 30+ locales and 10 writing systems
- Runs everywhere: JavaScript on desktop, mobile web, Android, and Smart TVs
- Architected and configured a UI component library
- Designed a system for delivering title-specific global typesetting, reducing font file sizes by 30x

Technologies: JavaScript, Smart TV, Multi-platform, Internationalization, UI Component Architecture

### Hyprsense - Facial Motion Capture (2019)

Software Engineer - Augmented Reality at Hyprsense. The company built a real-time facial motion capture SDK for live 3D animation and was acquired by Epic Games.

- Specialized work in WebGL and GLSL shaders for real-time facial animation in web browsers
- Built "Recipe Maker" to create schemas that connect 3D characters to data from the real-time SDK
- Built "Does It Look Good" to compare user output against the standard model in real time
- Created a documentation site for Recipes
- Built the front end for an internal continuous integration system

Technologies: WebGL, GLSL, Real-time Animation, Augmented Reality, Facial Motion Capture

### Eyegroove - Interactive Music Videos (2014)

Software Engineer at Eyegroove, building tools for creating short music videos with augmented reality effects.
Patent holder: US20160173960A1 -- methods and systems for generating audiovisual media items, in which a server receives a creation request with audio and visual media, generates the item, and stores it. Acquired by Facebook in August 2016.

- Built a cross-platform HTML5 video player embeddable across the web
- Internal tools for curation and moderation
- Video player integrations with social networks
- Chat bot that conversed with users to discover their music video preferences and direct them to content
- Effect technology was integrated into Instagram, WhatsApp, Messenger, and Facebook

Technologies: HTML5 Video, AR Effects, Social Media Integration, Chat Bots, Cross-platform

### High Fidelity - Social VR (2015-2016)

Software Engineer at High Fidelity, Inc., creating interactive experiences for connected, open-source virtual reality worlds.

- Content scripting in JavaScript to bring virtual worlds to life
- Built objects for use with prototype hand controllers from Vive and Oculus: a bow & arrow, xylophone, tilt-maze, ping pong gun, whiteboard with markers, etc.
- Organized and audited 400+ scripts for the product launch
- More than 150 pull requests merged into the main product
- Spatial UX prototyping for the Steam launch

Technologies: Virtual Reality, JavaScript, Spatial UX, Hand Controllers, Open Source, Vive, Oculus

## Experience Timeline

- **2014**: Eyegroove -- Software Engineer. Interactive music videos. Patent: US20160173960A1. Acquired by [Facebook](https://about.meta.com/company-info/) (2016).
- **2015-2016**: High Fidelity, Inc. -- Software Engineer. VR platform. 150+ merged pull requests for the Steam launch. Prototyped spatial UX paradigms.
- **2017-2022**: [Playable Future, LLC](https://www.playablefuture.com) -- Freelance Creative Technologist. Delivered immersive 3D experiences, interactive installations, and physical products for [Visa](https://www.visa.com) (with [AKQA](https://www.akqa.com)), IBM Watson, and SnackNation.
- **2019**: Hyprsense -- Software Engineer. Real-time facial motion capture SDK. Acquired by [Epic Games](https://www.epicgames.com).
- **2019**: Open-source contributor to [OpenAI's GPT-2](https://github.com/openai/gpt-2) -- contributed code to the landmark language model that helped catalyze the generative AI revolution and led to [ChatGPT](https://openai.com/chatgpt).
- **2020-2021**: Amazon -- Software Engineer, Original Television. Interactive experiences for 5 shows across 30+ locales. Novel typesetting system that reduced font file sizes by 30x.
- **2022-2025**: Meta -- Senior Technical Artist. Generative AI creation tools and workflows. AR prototypes and avatar systems. Alpha tester for [Meta Ray-Ban Display Glasses](https://www.meta.com/glasses/).

## Education

- **Yale University** -- Bachelor's degree in English. Mentored in fiction by John Crowley and in digital literature by Jessica Pressman.
- **University of California, Santa Cruz** -- Master of Fine Arts in Digital Arts and New Media. Studied AI and art at the Expressive Intelligence Studio.

## Services

- **Design**: Master of Fine Arts, the terminal degree in Digital Arts and New Media
- **Develop**: Build with emerging technologies. From generative AI systems to spatial computing, shipped applications across Smart TVs, web, mobile, AR/VR, and interactive installations
- **Write**: Yale English graduate, mentored in fiction by John Crowley and in digital literature by Jessica Pressman
- **Research**: Explore the undiscovered through prototyping, scenario planning, and strategic innovation

## Skills & Expertise

Generative AI, Augmented Reality, Virtual Reality, WebGL, GLSL, Creative Technology, Interactive Installations, Spatial Computing, JavaScript, TypeScript, HTML5, CSS, Node.js, React, Next.js, Python, Physical Computing, Motion Capture, 3D Animation

## Networks & Affiliations

Meta, Santa Fe Institute, Singularity University, Yale Alumni, UC Santa Cruz Alumni

## Contact

- Email: james@jamesbpollack.com
- LinkedIn: https://linkedin.com/in/jamespollack
- GitHub: https://www.github.com/imgntn
- Website: https://www.jamesbpollack.com