<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Technology Archives - Future of Cinema - by Dario Riccio</title>
	<atom:link href="https://www.darioriccio.com/en/category/technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.darioriccio.com/en/category/technology/</link>
	<description>Tecnologia e Business</description>
	<lastBuildDate>Thu, 16 Nov 2023 09:07:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.2</generator>

<image>
	<url>https://www.darioriccio.com/wp-content/uploads/2022/02/cropped-android-chrome-512x512-2-32x32.png</url>
	<title>Technology Archives - Future of Cinema - by Dario Riccio</title>
	<link>https://www.darioriccio.com/en/category/technology/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>AI in Film Production: How to Produce an Innovative Low-Budget Film</title>
		<link>https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/</link>
					<comments>https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/#respond</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Sun, 12 Nov 2023 12:29:56 +0000</pubDate>
				<category><![CDATA[Cinema]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[immersive cinema]]></category>
		<category><![CDATA[innovation]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/</guid>

					<description><![CDATA[<p>In today&#8217;s article, I want to start by introducing you to a major project we are working on with the usual nExt, combining artificial intelligence and cinema. We are using AI for cinematic production in an immersive 360-degree film, initially projected in mobile domes. Leveraging all the latest technologies to be a memorable experience that [&#8230;]</p>
<p>The post <a href="https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/">AI in Film Production: How to Produce an Innovative Low-Budget Film</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In today&#8217;s article, I want to start by introducing you to a major project we are working on with <strong><em>the usual nExt</em></strong>, combining artificial intelligence and cinema. We are using AI for cinematic production in an immersive 360-degree film, initially projected in mobile domes. We are leveraging all the latest technologies to create a memorable experience that combines the real and the virtual. In short: <strong>a milestone for the cinema of the future</strong>.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img fetchpriority="high" decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/dome-300x300.jpg" alt="People in an imaginary dome with projections of fish and planets in a large sky." class="wp-image-864" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/dome-300x300.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/dome-150x150.jpg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/dome.jpg 512w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>It promises to leave a deep mark and represent a significant turning point in the history of cinema: a more democratic cinema, closer to the people, reuniting what cinema originally was with what it has failed to be for so long.</p>

<p>In this article I want to discuss the basic idea in broad technical terms, and share some areas where artificial intelligence can be used to get the most out of an intentionally small budget.</p>

<p>We can also consider it an update, at a much more advanced stage, of the previous article from February 2023, <em><a href="https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/">How to Make Low Budget Movies with Artificial Intelligence &#8211; First Steps</a></em>.</p>

<h2 class="wp-block-heading">The revolutionary impact of Artificial Intelligence in cinema</h2>

<p>At one time, special effects and narrative techniques were the magic wands of cinema, but today artificial intelligence is playing a revolutionary card. It is an incredible help, an accomplice that opens the door to unprecedented innovation. We are riding this wave, with AI by our side lending a hand in creating compelling stories, digital characters that feel real, and music that gets right to your heart. But let us not forget the human touch, the real beating heart of each of our creations.</p>

<p>&#8220;Artificial intelligence and cinema&#8221; is not an empty slogan, but the beginning of a new chapter in storytelling. We are ready to prove that the future of entertainment is no longer a distant dream: it is here, and it is animated by artificial intelligence.</p>

<h2 class="wp-block-heading">Luna Dolph, Kyle and China: from virtual to real life</h2>
<div class="wp-block-image">
<figure class="alignright size-medium"><img decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen-300x300.jpg" alt="Drawing of a robot with a baby" class="wp-image-877" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen-300x300.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen-150x150.jpg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen.jpg 512w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>The story, scripted by <em>
  <strong>
    <a href="https://www.linkedin.com/in/gbernasconi?originalSubdomain=fr" target="_blank" rel="noreferrer noopener">Gérard Bernasconi</a>
  </strong>
</em>, starts with the story of <em>Luna Dolph</em>, <em>Kyle</em> and <em>China</em>; not mere characters, but three-dimensional avatars breathing life into a fascinating and complex virtual world. These avatars are not only protagonists of a story, but <strong>symbols of our age</strong>. Vivid representations of <strong>our progressive immersion in the virtual universe</strong>, also known as the <em>metaverse</em>. This concept, once relegated to science fiction, is rapidly taking shape and influencing diverse industries, from video games to social networking platforms, transforming the way we interact, work and have fun.</p>

<h3 class="wp-block-heading">Virtual Reality and Digital Identities</h3>
<div class="wp-block-image">
<figure class="alignleft size-medium"><img decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema-300x300.jpg" alt="A girl and a robot together in a movie theater" class="wp-image-872" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema-300x300.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema-150x150.jpg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema.jpg 512w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>Our goal is not only to tell a compelling story, but to invite the audience to <strong>reflect on the nature of reality in a digital age</strong>. These avatars, with their complex interactions and intricate worlds, represent the ongoing fusion of our physical and digital lives. A convergence that is redefining the very meaning of identity, community and belonging.</p>

<p>At the core of our narrative we find Luna Dolph, Kyle and China. They are three three-dimensional avatars whose lives take place in a virtual world of extraordinary beauty. While existing in the digital ether, their story is a bridge to reality, a means of reminding our viewers of the irreplaceable value of human interaction and real life. In an age when digitization has transformed our ways of connecting, our narrative aims to use technology not as a refuge, but as a springboard to rediscover and reinvigorate authentic and tangible sociality.</p>

<h3 class="wp-block-heading">Technology helps you live better</h3>

<p>As Luna, Kyle and China navigate a fascinating metaverse, their experiences and challenges resonate with a clear message: <strong>technology, however advanced, is a tool that, when used wisely, can enrich but not replace the warmth and complexity of human connections</strong>. Our goal is to bring out the realization that despite the attractiveness of digital, real life takes place off the screen. In the shared laughter, handshakes, hugs and spontaneous moments that make up the fabric of our existence.</p>

<p>With this storytelling, we aspire to inspire viewers to lift their gaze from their devices and re-immerse themselves in the real world, enriching their lives with authentic experiences. Through the exploration of virtual worlds, we want to celebrate and promote the beauty and irreplaceable importance of real life and human sociality.</p>

<p>The film will represent the first outing in the real world for the three main characters.</p>

<h2 class="wp-block-heading">How we use AI in film production</h2>

<p>We are still in the pre-production stage, so from a practical point of view I will keep you updated in the coming months. For now, we have a rough idea: we have selected a compendium of the latest artificial intelligence (AI) technologies that are both affordable and available to all.</p>

<p>The film will be in <strong>fulldome</strong>, the immersive 360&#215;180-degree format. We will project it primarily in domes and planetariums. This is a crucial innovation looking to the future, as we push more and more toward total immersiveness. And we have a limited budget: roughly between 10,000 and 20,000 euros. The film will last about 40 minutes; about 30 percent will take place in the virtual world of Luna and her friends (entirely recreated in <em><a href="https://www.unrealengine.com/" target="_blank" rel="noreferrer noopener">Unreal Engine</a></em>), and the remaining 70 percent in the real world.</p>

<h3 class="wp-block-heading">Using Luma AI for the cinema of the future</h3>

<p>To begin describing this technological arsenal, I&#8217;ll mention <strong><em><a href="https://lumalabs.ai">Luma AI</a></em></strong>, a pioneering solution that completely revolutionizes the generation of three-dimensional environments. <strong>Luma AI allows us to reproduce real environments within Unreal Engine, in photorealistic quality, even with an iPhone</strong>. Employing advanced technologies such as <a href="https://www.matthewtancik.com/nerf" target="_blank" rel="noreferrer noopener"><strong>Neural Radiance Fields</strong></a> (NeRF) and the brand-new <strong><a href="https://arxiv.org/pdf/2308.04079.pdf" target="_blank" rel="noreferrer noopener">Gaussian Splatting</a></strong> (the latter published just three months ago by the Université Côte d&#8217;Azur, France), we can capture the complexity and richness of the real world in digital format, bring it into Unreal Engine (including the ability to move freely within the scene), and bring to life scenes previously reserved for large budgets.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="300" height="181" src="https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-300x181.jpg" alt="Screenshot of the Luma AI website" class="wp-image-885" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-300x181.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-1024x619.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-768x464.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-1536x928.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-2048x1237.jpg 2048w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>We lower the costs associated with creating detailed settings and complex objects, as well as streamline and speed up the production process. Luma AI not only democratizes access to world-class technologies, but also gives us the tools to experiment at a speed that was previously unimaginable. And it allows us to impart an unprecedented level of vividness and depth to our scenes.</p>
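<p>For the technically curious, NeRF&#8217;s core is a volume-rendering rule: colors sampled along each camera ray are blended according to density and transmittance. A minimal Python sketch of that published rule (an illustration of the math, not Luma AI&#8217;s actual pipeline):</p>

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Blend samples along one camera ray using the NeRF volume-rendering
    rule: each sample contributes its color weighted by its opacity and by
    the transmittance (light surviving past all earlier samples)."""
    sigmas = np.asarray(sigmas, dtype=float)   # volume densities per sample
    colors = np.asarray(colors, dtype=float)   # RGB per sample, in [0, 1]
    deltas = np.asarray(deltas, dtype=float)   # distance between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A faint red sample in front of a dense blue one: blue dominates
rgb = composite_ray(sigmas=[0.5, 10.0],
                    colors=[[1, 0, 0], [0, 0, 1]],
                    deltas=[0.1, 0.1])
```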

<h3 class="wp-block-heading">Skybox AI for cinema &#8211; Simplified lighting on Unreal Engine</h3>
<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="168" src="https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-300x168.jpg" alt="Skybox AI user interface" class="wp-image-888" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-300x168.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-1024x575.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-768x431.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-1536x862.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-2048x1150.jpg 2048w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>Another crucial tool in our repertoire is <strong>
  <a href="https://skybox.blockadelabs.com" target="_blank" rel="noreferrer noopener">Skybox AI</a>
</strong> by <em>
  <a href="https://www.blockadelabs.com" target="_blank" rel="noreferrer noopener">Blockade Labs</a>
</em>, for creating immersive <em>skyboxes</em>, enriching virtual scenes with vital details about lighting and setting.</p>

<p>A <a href="https://it.wikipedia.org/wiki/Skybox_(videogiochi)" target="_blank" rel="noreferrer noopener">skybox</a> in Unreal Engine not only provides visually convincing surroundings (mountains, sky, distant houses, etc&#8230;), but <strong>also affects the overall lighting of 3D assets within the scene</strong>. This is what interests us most in filmmaking: it acts as an ambient light source, reflecting its colors and hues on objects, helping to <strong>create a consistent and realistic atmosphere</strong>. For example, a skybox depicting a sunset will infuse warm orange and red hues on the scene; while a night skybox will provide a cooler, dimmer light. This process helps integrate 3D assets into the surrounding environment, making the entire visual experience more immersive and coherent.</p>

<p>This aspect proves essential in creating 360-degree fulldome environments, where every tiny detail is critical to sustaining the illusion of a fully immersive world.</p>
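<p>To make the lighting role concrete: to a first approximation, a skybox&#8217;s ambient contribution can be modeled as the average skybox color modulating each object&#8217;s base color. A toy Python sketch of that idea (a deliberate simplification of what a real engine does):</p>

```python
import numpy as np

def ambient_tint(albedo, skybox_pixels, strength=1.0):
    """Crude approximation of a skybox's ambient light: tint an object's
    base color (albedo) by the average color of the skybox pixels.
    Unreal Engine's real image-based lighting is far more sophisticated."""
    ambient = np.asarray(skybox_pixels, dtype=float).reshape(-1, 3).mean(axis=0)
    return np.clip(np.asarray(albedo, dtype=float) * ambient * strength, 0.0, 1.0)

# A neutral grey object under a warm sunset skybox picks up orange hues
sunset_pixels = [[1.0, 0.5, 0.2], [0.9, 0.4, 0.3]]
tinted = ambient_tint([0.8, 0.8, 0.8], sunset_pixels)
```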

<h3 class="wp-block-heading">Using Kaiber AI for our film</h3>

<p><strong><a href="https://kaiber.ai" target="_blank" rel="noreferrer noopener">Kaiber AI</a> </strong>is a useful, high-quality tool, but I have one serious doubt: I don&#8217;t know whether it works with fulldome video. I haven&#8217;t had time to experiment with it yet, but it could help smooth out the scenes so that the assets blend together even better. It applies AI directly to the final video files.</p>

<h3 class="wp-block-heading">Artificial intelligence and cinema: Reverie AI</h3>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="608" src="https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-1024x608.jpg" alt="Reverie AI website screenshot" class="wp-image-890" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-1024x608.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-300x178.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-768x456.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-1536x911.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-2048x1215.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>In anticipation of the launch of <a href="https://reverieai.net" target="_blank" rel="noreferrer noopener">
  <strong>Reverie AI</strong>
</a>, we are intrigued by its promise to create virtual worlds for Unreal Engine almost by &#8220;copying&#8221; a pre-existing picture. Its ability to generate scenery that faithfully mimics reality, combined with its potential for color-correcting virtual scenes, opens the door to unlimited possibilities in visual storytelling. <em>Reverie AI</em> promises to be an excellent complement to our workflow, improving visual consistency and ensuring smooth, convincing transitions between the virtual and real worlds.</p>

<h3 class="wp-block-heading">Move.ai, cheap and working mocap</h3>

<p>Another &#8220;smart&#8221; tool we plan to use for our film will be <strong>Move.ai</strong>, specifically the inexpensive service <em>
  <a href="https://www.move.ai/single-camera" target="_blank" rel="noreferrer noopener">Move One</a>
</em> (on launch offer at $15 a month, then expected to cost $30). With a simple <a href="https://apps.apple.com/us/app/move-one/id6448635527" target="_blank" rel="noreferrer noopener">iPhone app</a>, we will be able to create simple, ready-made custom animations without too many fixes or clean-ups. This reduces time and cost, allowing you not to be limited by the animations already available in services such as <em>
  <a href="https://www.mixamo.com" target="_blank" rel="noreferrer noopener">Mixamo</a>
</em>, <em>
  <a href="https://actorcore.reallusion.com" target="_blank" rel="noreferrer noopener">ActorCore</a>
</em> or the <a href="https://mocap.market/">
  <em>MoCap Market</em>
</a> by Noitom.</p>

<p>Here is a very quick video test:</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Move.Ai single camera Motion capture Test &quot;MOVE ONE&quot;" width="678" height="381" src="https://www.youtube.com/embed/mJoZURcNiL4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div></figure>

<p>Clearly, they also offer a more expensive professional service that allows up to 8 cameras to be used. But we do not plan on using it for this first production unless a real need arises.</p>

<h2 class="wp-block-heading">Speech-to-speech for film dubbing&#8230; Is it possible?</h2>

<p>In our production, innovation does not stop with the creation of the digital world; in fact, we also want to take advantage of the ongoing technological revolution in traditional aspects of filmmaking, such as <strong>dubbing</strong>.</p>

<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech.jpeg" alt="Image generated with MidJourney showing a man from the future talking to a robot." class="wp-image-893" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech.jpeg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech-300x300.jpeg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech-150x150.jpeg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech-768x768.jpeg 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>The voice of Luna and the other 3D avatars must be consistent (not to say identical) across all media, in both the virtual and real worlds. We face a complex challenge: <strong>maintaining the same voice across multiple platforms</strong>, including social media and especially real-time streaming, <strong>without tying ourselves to a single voice actor</strong>. This gives us flexibility in storytelling and lets us adapt to various formats without depending on the availability of a specific actor.</p>

<p><strong>The idea is to replicate the voices of some real actors and associate them with the avatars</strong> (Luna, Kyle, and China for starters), then transform any actor&#8217;s voice, in real time, into Luna&#8217;s. We would use it both for dubbing films and social content (in the original language as well as in translations), and for <em><a href="https://it.wikipedia.org/wiki/Live_streaming" target="_blank" rel="noreferrer noopener">live streaming</a></em>, superimposing these voices on the avatar animated in <a href="https://docs.unrealengine.com/5.3/en-US/recording-face-animation-on-ios-device-in-unreal-engine/" target="_blank" rel="noreferrer noopener">motion capture</a>.</p>

<h3 class="wp-block-heading">From the excellent but expensive Respeecher, to the ambiguous Voicemod</h3>

<p>We explored options such as <a href="https://www.respeecher.com" target="_blank" rel="noreferrer noopener">
  <strong>Respeecher</strong>
</a>, an advanced speech-to-speech conversion tool, but the costs for real-time are prohibitive. We&#8217;re talking about 1,000 or 2,000 a month for a few hours of use. <strong>
  <a href="https://www.voicemod.net/ai-voices/" target="_blank" rel="noreferrer noopener">Voicemod</a>
</strong> presents itself as a cheaper solution thanks to its <em>AI voices</em>, although there are conflicting rumors about its reliability (some even consider it malware or <a href="https://www.kaspersky.it/resource-center/definitions/what-is-cryptojacking" target="_blank" rel="noreferrer noopener">cryptojacking</a>&#8230;). And there remains the problem of not owning the rights to the voice, which they retain: something that will certainly prove troublesome in the future. I do not yet know the real-time conversion cost of <em>
  <strong>
    <a href="https://www.resemble.ai" target="_blank" rel="noreferrer noopener">Resemble.ai</a>
  </strong>
</em>, which I have used in the past for the much cheaper text-to-speech, and of <a href="https://www.veritonevoice.com" target="_blank" rel="noreferrer noopener">
  <strong>Veritone Voice</strong>
</a>.</p>

<p>Another tool that I have not been able to test is <strong>
  <a href="https://crimsontech.jp/apps/voidol3/?lang=en" target="_blank" rel="noreferrer noopener">Voidol 3</a>
</strong>, which costs about $300. I couldn&#8217;t find a demo version, though I admit I didn&#8217;t try very hard to request one. It is one of several Japanese programs born of the local passion for the anime world, yet adaptable to our purpose, as we will see in a moment with another Japanese tool.</p>

<h3 class="wp-block-heading">MetaVoice Live, Mangio RVC Fork and W-Okada Voice Changer, free and open-source</h3>

<p><strong>The final solutions</strong>, after much research, are: <strong>
  <a href="https://github.com/metavoicexyz/MetaVoiceLive/releases" target="_blank" rel="noreferrer noopener">MetaVoice Live</a>
</strong> and the <a href="https://github.com/w-okada/voice-changer" target="_blank" rel="noreferrer noopener">
  <strong>Voice</strong>
</a><a href="https://github.com/w-okada/voice-changer"><strong> Changer</strong> from W-Okada</a>. Both <strong>open source</strong>, which reassures us that we can base the &#8220;future life&#8221; of the characters on these services, and both <strong>free</strong>. I cannot fail to mention the YouTube channel <a href="https://www.youtube.com/@ai-tools-search" target="_blank" rel="noreferrer noopener">
  <em>AI Tools Search</em>
</a> which has been most useful to me. Among the most interesting videos in this area, certainly this one:</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Make UNLIMITED AI Voice Conversions, Training, &amp; Covers for FREE: RVCv2 Installation &amp; Tutorial" width="678" height="381" src="https://www.youtube.com/embed/ixB9oalT3cQ?start=475&#038;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div></figure>

<p>I particularly like <em>MetaVoice</em>: it is under active development and has a cloud version for non-real-time conversions that provides greater quality and flexibility. That version is not free, but the cost, between $10 and $25 per month, is all in all manageable.</p>

<p>In contrast, <em>W-Okada</em>&#8217;s <em>Voice Changer</em> has many independent developers building compatible solutions, such as GitHub user <a href="https://github.com/Mangio621" target="_blank" rel="noreferrer noopener"><em>Mangio621</em></a>, who created the <strong><a href="https://github.com/Mangio621/Mangio-RVC-Fork/releases/tag/v23.7.0">Mangio RVC Fork</a></strong>, software with a Web interface that can transform the voice of a real actor into that of a chosen voice model. Many of these models, especially of famous people, are already available on sites such as <a href="http://voice-models.com" target="_blank" rel="noreferrer noopener">Voice-Models.com</a> in &#8220;.pth&#8221; format (a classic format used in machine learning). But the best part is that with the same <em>Mangio RVC</em> we can train a custom voice, entirely locally thanks to a good video card, and therefore always available, for free.</p>
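<p>What makes real-time conversion demanding is latency: audio must be processed in small blocks as it arrives. A minimal Python sketch of that block-based structure, with a hypothetical <code>convert_chunk</code> standing in for the neural voice model (which RVC-style tools would load from a &#8220;.pth&#8221; checkpoint):</p>

```python
import numpy as np

SAMPLE_RATE = 16_000
CHUNK = 1_024  # samples per block: about 64 ms at 16 kHz

def convert_chunk(chunk):
    """Hypothetical stand-in for a voice-conversion model. A real RVC-style
    tool runs a neural network (loaded from a .pth checkpoint) here; we
    simply invert the waveform to keep the sketch self-contained."""
    return -chunk

def stream_convert(audio, chunk_size=CHUNK):
    """Feed audio through the converter in fixed-size blocks, the way
    real-time voice changers keep latency low."""
    blocks = [convert_chunk(audio[i:i + chunk_size])
              for i in range(0, len(audio), chunk_size)]
    return np.concatenate(blocks)

# One second of a 440 Hz test tone, converted block by block
tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
converted = stream_convert(tone)
```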

<p>I will write a dedicated article on this shortly, so follow along as I run some interesting tests.</p>

<h2 class="wp-block-heading">Generating video with artificial intelligence</h2>

<p>One use of AI in film production may be text-to-video, or video-to-video. But how useful can generating videos with artificial intelligence be? I mean generating from scratch, describing to the AI in text (or with a very simplified video reference) what you want to achieve. It is somewhat the future; to date, tools like <a href="https://research.runwayml.com/gen1" target="_blank" rel="noreferrer noopener"><strong>Runway Gen-1</strong></a>, <a href="https://www.genmo.ai" target="_blank" rel="noreferrer noopener"><strong>Genmo AI</strong></a>, or <a href="https://moonvalley.ai"><strong>Moonvalley AI</strong></a> are little more than experiments. Useful at certain junctures, but far from the quality and realism needed for a film production.</p>

<p>Evidently, we will still have to do the work ourselves for a few more years <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>

<h2 class="wp-block-heading">Canon EOS R5C and Dual Fisheye lens for 3D fulldome video</h2>

<p>What about live action filming? In our journey, we are trying our hand at a very exciting combination of equipment. We always keep the goal in mind: <strong>little expense, much return</strong>. I had originally planned to shoot everything in full 360 degrees, partly to make it already compatible with a future virtual-reality port. But the cost became prohibitive, both for the camera (a professional model like the <em><a href="https://www.insta360.com/it/product/insta360-titan">Insta 360 Titan</a></em>, which incidentally has not been updated in years, costs more than 17,000 euros) and because of the difficulty of starting from scratch with such a wide field of view.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="277" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52-277x300.jpg" alt="" class="wp-image-897" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52-277x300.jpg 277w, https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52-768x831.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52.jpg 906w" sizes="(max-width: 277px) 100vw, 277px" /></figure></div>
<p>So the idea is to produce the right video for the dome, in <strong>360&#215;180 degrees</strong> (basically half the sphere). And the lens <em>
  <a href="https://www.canon.it/lenses/rf-5-2mm-f2-8l-dual-fisheye-lens/" target="_blank" rel="noreferrer noopener">Canon RF 5.2mm F2.8 L Dual Fisheye</a>
</em>, paired with the <em>
  <a href="https://www.canon.it/cameras/eos-r5c/" target="_blank" rel="noreferrer noopener">Canon EOS R5C</a>
</em>, should prove to be a winning choice. This setup not only provides immersive images, but also allows us to experiment with stereoscopic shots, which add an extra level of depth and realism.</p>
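<p>For those curious about the geometry: a fulldome (&#8220;dome master&#8221;) frame is a square image whose circular area is mapped onto the dome&#8217;s hemisphere. A small Python sketch of one common convention, turning a normalized pixel coordinate into a 3D view direction (an illustration of the mapping, not any specific tool&#8217;s code):</p>

```python
import math

def domemaster_to_direction(u, v):
    """Map normalized pixel coordinates (u, v) in [0, 1] of a square
    dome-master frame to a unit view direction, using the common
    equidistant-fisheye convention: the radius from image center is
    proportional to the zenith angle, spanning 180 degrees edge to edge."""
    x, y = 2.0 * u - 1.0, 2.0 * v - 1.0   # recenter to [-1, 1]
    r = math.hypot(x, y)
    if r > 1.0:
        return None                        # outside the circular image area
    theta = r * math.pi / 2.0              # zenith: 0 at center, 90 deg at rim
    phi = math.atan2(y, x)                 # azimuth around the dome
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))               # z axis points toward the dome apex

center = domemaster_to_direction(0.5, 0.5)  # image center: straight up
rim = domemaster_to_direction(1.0, 0.5)     # image edge: along the horizon
```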

<p>The Canon EOS R5C, with its dynamic range of 11 to 12 stops and its 8K sensor, offers excellent value for money. This is an important consideration for us, as we are trying to maintain a balanced budget without sacrificing quality. We will also consider renting: from <a href="https://www.adcom.it" target="_blank" rel="noreferrer noopener"><em>Adcom</em></a>, lens and camera together are offered at about 200 euros per day.</p>

<p>To be honest, we can&#8217;t fully exploit the potential of stereoscopy yet because of projection, but it is definitely something we would like to explore in the future. It is one of those things that looks really cool and could add a special touch to our project. Have you ever projected stereoscopic fulldomes or do you have any suggestions on how we could integrate it into our work? I would be happy to hear your thoughts and ideas.</p>

<p>And if it falls short in anything compared to more prestigious cameras&#8230; again, we count on leveraging AI for video-quality improvement. But research in the field still needs to be done.</p>

<h2 class="wp-block-heading">Conclusions</h2>

<p>In the end, our choice of digital tools and equipment reflects a desire not to compromise on quality while keeping an eye on the budget.</p>

<p>In short, we are creating something beyond traditional cinema. Thanks to artificial intelligence and cutting-edge technology, the &#8220;cinema of the future&#8221; is no longer a dream. It is real, we want to see it happen. And with the genius of <strong>
  <em>
    <a href="https://www.linkedin.com/in/gbernasconi" target="_blank" rel="noreferrer noopener">Gérard Bernasconi</a>
  </em>
</strong> on the screenplay (who also gave us a great technical contribution), the precision of <strong>
  <a href="https://www.linkedin.com/in/michela-sette-613009198/" target="_blank" rel="noreferrer noopener">
    <em>Michela Sette</em>
  </a>
</strong> in the role of VFX Supervisor and the creativity of <strong>
  <em>
    <a href="https://www.linkedin.com/in/michele-pelosio-4a455b112/" target="_blank" rel="noreferrer noopener">Michele Pelosio</a>
  </em>
</strong> as director, we are forging a revolutionary cinematic experience. Get ready, because we are about to take you to a world where cinema and reality merge into a transcendental experience <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The post <a href="https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/">AI in Film Production: How to Produce an Innovative Low-Budget Film</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to make low-budget films with Artificial Intelligence &#8211; Early stages</title>
		<link>https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/</link>
					<comments>https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/#comments</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Thu, 16 Feb 2023 17:13:23 +0000</pubDate>
				<category><![CDATA[Cinema]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[cinema]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/</guid>

					<description><![CDATA[<p>Discover how to use the latest artificial intelligence tools to write great low-budget screenplays. Get tips, suggestions, and strategies from experts to make your next film a success.</p>
<p>The post <a href="https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/">How to make low-budget films with Artificial Intelligence &#8211; Early stages</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I start this article (which I will divide into multiple parts) on the day of the Immaculate Conception: December 8th. I want to talk about the current state of artificial intelligence to assist with low-budget filmmaking. It will require some time, especially to try out various services, as I don&#8217;t want to make the usual sterile list. Also, because it will help us produce content for the films we will make in the near future.</p>

<p>I got the idea, albeit unintentionally, from <a href="https://www.linkedin.com/in/nicolasperrier/" target="_blank" rel="noreferrer noopener">Nicolas Perrier</a> from the University of Lyon in France, with one of his <a href="https://www.linkedin.com/feed/update/urn:li:activity:7006515451774001152/" target="_blank" rel="noreferrer noopener">posts on LinkedIn</a>. Perrier is a skilled expert in innovation in augmented and virtual reality, and the post in question is about <a href="https://plask.ai" target="_blank" rel="noreferrer noopener">Plask</a>; one of the many tools for creating 3D animations starting from a simple video. In practice, it&#8217;s Motion Capture without expensive and complex equipment. This technology may be of particular interest to us to produce animated videos at a fraction of the current cost, and even with actors remotely.</p>

<figure class="wp-block-video"><video controls="" src="https://www.darioriccio.com/wp-content/uploads/2022/12/Hero.mp4"></video><figcaption class="wp-element-caption">Plask example video</figcaption></figure>

<p>I&#8217;m also getting additional help, not just from Nicolas himself with his countless posts, but also from the website <a href="https://www.futurepedia.io" target="_blank" rel="noreferrer noopener">Futurepedia.io</a>. It&#8217;s an &#8220;AI wiki&#8221;, featuring a selection of many tools currently available to the general public.</p>

<p>Let&#8217;s analyze some of these tools, specifically the ones that are useful in filmmaking. Both for writing and for technical production of videos, as well as for voices. We&#8217;ll evaluate the quality of the results, conduct experiments, and learn about their costs.</p>

<p>To better understand how to use them in our low-budget films, I decided to create a short film (with very low expectations, just for technical experimentation) using them as much as possible.</p>

<h2 class="wp-block-heading">Making videos with artificial intelligence.</h2>

<p>Let&#8217;s divide the services into three main categories: <em>writing</em>, <em>video</em>, and <em>audio</em>. We start with writing, since we need the idea first.</p>

<h2 class="wp-block-heading">Film writing with artificial intelligence</h2>

<p>We need a story. Created by an AI? Let&#8217;s see; there is no shortage of writing tools. And if you don&#8217;t agree, you can always argue in the comments.</p>

<h2 class="wp-block-heading">How does GPT-3 work?</h2>

<p>Most public AI writing services are currently based on <strong>GPT-3</strong>, which has 175 billion machine learning parameters. There are actually many alternatives: <strong><a href="https://huggingface.co/bigscience/bloom" target="_blank" rel="noreferrer noopener">BigScience Bloom</a></strong>, a large-scale language model launched recently (<em>with the advantage of being open source</em>), or the German <em>Aleph Alpha</em> with its <strong>Luminous</strong> (200 billion parameters).</p>

<h3 class="wp-block-heading">What are the parameters of an artificial intelligence?</h3>

<p>Imagine having a task that requires predicting whether an image contains a cat or not. A machine learning model could be trained on many images labeled as &#8220;cat&#8221; or &#8220;not cat&#8221; to learn to recognize the distinctive features of cats.</p>

<p>To do this, the model uses a neural network, which is a set of interconnected nodes. Each node represents a computation performed on the input data. <strong>The parameters, or &#8220;weights,&#8221; are the values assigned to each node that affect the strength of the connection between nodes</strong>.</p>

<p>When the model sees a new image, these weights are used to perform a series of calculations that ultimately produce a prediction, such as &#8220;cat&#8221; or &#8220;not cat&#8221;. The number of weights used in the model is the number of parameters.</p>

<p>In summary, parameters are like &#8220;adjustment knobs&#8221; that influence how a machine learning model processes data to make a prediction, and they are modified during training to improve the model&#8217;s performance.</p>

<p>A model with many parameters will have more opportunities to adjust its calculations to fit the training data, but it may also be more prone to <em>overfitting</em>, meaning an excessive adherence to the training data and a poor ability to generalize.</p>
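<p>To make this concrete, here is a minimal sketch (plain Python, no ML libraries; the layer sizes are purely illustrative, not those of any real model) that counts the weights in a small fully connected network like the hypothetical &#8220;cat / not cat&#8221; classifier described above:</p>

```python
def dense_params(in_features, out_features):
    # Each output node holds one weight per input node, plus one bias term.
    return in_features * out_features + out_features

# Toy "cat / not cat" classifier: 64x64 grayscale image -> 128 hidden nodes -> 1 output.
layers = [(64 * 64, 128), (128, 1)]
total = sum(dense_params(i, o) for i, o in layers)
print(total)  # 524545 -- about half a million parameters for even this tiny network
```

<p>GPT-3&#8217;s 175 billion parameters are numbers of exactly this kind, just spread across hundreds of far larger layers.</p>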

<h3 class="wp-block-heading">Other competitors of ChatGPT.</h3>

<p>Other competitors are <strong>Nvidia</strong> with <em><a href="https://developer.nvidia.com/megatron-turing-natural-language-generation" target="_blank" rel="noreferrer noopener">Megatron-Turing NLG</a></em> (huge, but not available to the public), <em>Google</em> with their <strong><a href="https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html" target="_blank" rel="noreferrer noopener">GLaM</a></strong>, <strong><a href="https://blog.google/technology/ai/lamda/" target="_blank" rel="noreferrer noopener">LaMDA</a></strong> and <strong><a href="https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html" target="_blank" rel="noreferrer noopener">PaLM</a></strong> (yes, they leave nothing out), <strong><a href="https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/" target="_blank" rel="noreferrer noopener">OPT</a></strong> by <em>Meta</em>, <strong><a href="https://github.com/amazon-science/alexa-teacher-models" target="_blank" rel="noreferrer noopener">AlexaTM</a></strong> by <em>Amazon</em>, and various Chinese ones such as <em>Alibaba</em> with its <strong><a href="https://dl.acm.org/doi/abs/10.1145/3503221.3508417" target="_blank" rel="noreferrer noopener">BaGuaLu</a></strong>.</p>

<p>To conclude, I mention <strong><a href="https://6b.eleuther.ai" target="_blank" rel="noreferrer noopener">GPT-J</a></strong>, another open source alternative to GPT-3.</p>

<p>Let&#8217;s quickly understand how GPT-3 works, using a rephrased response from the AI itself:</p>

<p><em>GPT-3 (Generative Pretrained Transformer 3) is a state-of-the-art language model developed by OpenAI. It utilizes a deep learning technique called </em>&#8220;<em>transformers</em>&#8221;<em> to generate human-like text based on a given input.</em></p>

<p><em>Here&#8217;s how GPT-3 works:</em></p>

<ol class="wp-block-list">
<li><em>The model is trained on a large set of texts generated by humans, such as books, articles, and websites. This training process allows the model to learn patterns and structures of natural language.</em></li>



<li><em>When it receives an input, the model processes it using its layers of neural networks. Each layer performs a different task, such as analyzing the syntax, semantics, and context of the input.</em></li>



<li><em>The model then generates a response based on the input and its understanding of natural language. It does so by predicting the next words in the sentence, using the patterns and structures learned during training.</em></li>



<li><em>The model repeats this process several times, generating multiple responses to the input. These responses are then ranked and evaluated by the model, and the highest-scoring response is selected as the final output.</em></li>
</ol>

<p><em>GPT-3 is known for its ability to generate text that is often difficult to distinguish from text written by a human being. It is used in various applications, such as language translation, text summarization, and text generation.</em></p>
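<p>The &#8220;predict the next word&#8221; step in point 3 can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a sample text, then greedily picks the most frequent continuation. Real models like GPT-3 use transformer networks over subword tokens rather than raw word counts, but the generation loop has the same shape:</p>

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count how often each word is followed by each other word.
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, n_words=5):
    # Greedy decoding: repeatedly append the most frequent next word.
    out = [start]
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the lost city of gold is hidden in the jungle and the city is old"
model = train_bigrams(corpus)
print(generate(model, "the"))  # reproduces the most frequent word chains from the corpus
```

<p>With a corpus of a dozen words the output can only parrot its input; scale the idea up to billions of learned weights instead of simple counts, and you get the fluent (if sometimes vague) prose seen in the story below.</p>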

<h3 class="wp-block-heading">ChatGPT can tell stories for screenplays.</h3>

<p>Let&#8217;s start directly with OpenAI&#8217;s product: <strong>ChatGPT</strong>. It is perhaps the tool of the moment; the first to have made the leap to the general public. It&#8217;s almost unsettling how accurate and &#8220;real&#8221; it can be in conversation, but it has also been discussed at length, everywhere, in recent weeks.</p>

<p>I signed up on the OpenAI website and opened the chat to try talking to GPT-3 (I can&#8217;t imagine what GPT-4 will do next), asking it to write me a story with some dialogue for a short computer-graphics film. And well, it simply did it.</p>

<h4 class="wp-block-heading">The story written by ChatGPT</h4>

<p>This is the story exactly as ChatGPT wrote it, with no modifications.</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>Title: The Lost City (click here to open)</span></summary><div class="lightweight-accordion-body">

<p>As the sun rose on the horizon, a small group of adventurers set out to find the lost city of gold. For months, they had been searching, following old maps and ancient legends in the hopes of striking it rich.</p>



<p>&#8220;Are you sure this is the right way?&#8221; asked <strong>Maria</strong>, the leader of the group. &#8220;I don&#8217;t see any sign of the city on the horizon&#8221;.</p>



<p>&#8220;Trust me, Maria,&#8221; replied <strong>Jake</strong>, the group&#8217;s ancient history expert. &#8220;The city is hidden in the depths of the jungle and it won&#8217;t be easy to find. But if we stick to the plan, we will find it&#8221;.</p>



<p>The group ventured into the dense jungle, hacking through the vines and undergrowth with their machetes. After several hours of grueling travel, they reached the edge of a clearing.</p>



<p>&#8220;Look!&#8221; <strong>Sarah</strong>, the youngest member of the group, shouted. &#8220;Down there! I see something shining in the sunlight!&#8221;.</p>



<p>The group rushed to the spot where Sarah had seen the glimmer, and sure enough, they found a small golden statue buried in the ground.</p>



<p>&#8220;This must be it!&#8221; exclaimed <strong>Maria</strong>. &#8220;We&#8217;ve found the lost city of gold!&#8221;.</p>



<p>The group started to dig, uncovering more and more treasures. They found gold coins, precious jewels, and ancient artifacts of great value.</p>



<p>&#8220;We&#8217;re rich!&#8221; exclaimed <strong>Jake</strong>, laughing with excitement. &#8220;We&#8217;ve found a treasure beyond our wildest dreams!&#8221;</p>



<p>But as they continued to dig, the ground beneath their feet began to rumble and shake. Suddenly, a huge stone door opened, revealing a hidden entrance to the lost city.</p>



<p>&#8220;Quick, everyone inside!&#8221; shouted <strong>Maria</strong>, leading the group into the dark tunnel. &#8220;We must find the source of the treasure before it is too late!&#8221;</p>



<p>The group went into the tunnel, lit by flashlights. They reached a large open chamber filled with treasures.</p>



<p>&#8220;Here we go!&#8221; shouted <strong>Sarah</strong>. &#8220;The Lost City of Gold!&#8221;</p>



<p>But as they explored the chamber, they heard a loud rumble. Suddenly, the ground beneath their feet gave way and they fell into a deep pit.</p>



<p>&#8220;Help!&#8221; cried <strong>Maria</strong>, as they fell into darkness. &#8220;We are trapped!&#8221;</p>



<p>But just when they thought all was lost, they heard a faint voice calling to them.</p>



<p>&#8220;Don&#8217;t worry, adventurers,&#8221; said <strong>the voice</strong>. &#8220;I am the guardian of the lost city and I will help you escape.&#8221;</p>



<p>With the help of the watchman, the group managed to get out of the pit and back to the surface. They came out of the jungle, triumphant and rich beyond expectation.</p>



<p>&#8220;We did it!&#8221; cried <strong>Jake</strong>, laughing with joy. &#8220;We have found the lost city of gold!&#8221; </p>

</div></details></div>
<p>Let us consider this first story. Of course, it didn&#8217;t make things easy for me&#8230; It could have set the story in an ordinary city; a jungle, by contrast, will be far more complex to recreate. But I have to play along; that&#8217;s the game.</p>

<h3 class="wp-block-heading">The same story, in a short version</h3>

<p>Oh, I also asked for a shorter version. It gave me a summary. Then I specified that I wanted a short version but with dialogue, and it simply did it. I report it here for completeness:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The Lost City (short version)</span></summary><div class="lightweight-accordion-body">

<p>As the group of adventurers searched for the lost city of gold, they came across a hidden entrance in the jungle.</p>



<p>&#8220;Look, over there!&#8221; said <strong>Sarah</strong>, pointing to the entrance.</p>



<p>The group rushed to the entrance and explored the city, finding treasure beyond their wildest dreams.</p>



<p>&#8220;We are rich!&#8221; shouted <strong>Jake</strong>.</p>



<p>But as they continued to search, the ground beneath them gave way and they fell into a pit.</p>



<p>&#8220;Help!&#8221; cried <strong>Maria</strong>.</p>



<p>A city watchman helped them escape and they emerged victorious and wealthy.</p>



<p>&#8220;We made it!&#8221; cried <strong>Jake</strong>.</p>

</div></details></div>
<h3 class="wp-block-heading">ChatGPT automatically writes the script</h3>

<p>I was considering assembling it into a screenplay myself, when I thought, &#8220;Why not ask it?&#8221; Said and done. It rewrote the story for me, already formatted for film:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>Screenplay for &#8220;The Lost City&#8221;</span></summary><div class="lightweight-accordion-body">

<p>INT. JUNGLE &#8211; DAY</p>



<p>The sun rises on the horizon as a group of adventurers, MARIA, JAKE and SARAH, set out to find the lost city of gold.</p>



<p>MARIA<br/>Are you sure this is the right way? I see no sign of the city on the horizon.</p>



<p>JAKE<br/>Trust me, Maria. The city is hidden deep in the jungle, and it won&#8217;t be easy to find. But if we stick to the plan, we will find it.</p>



<p>The group goes into the dense jungle, cutting through the lianas and undergrowth with their machetes.</p>



<p>SARAH<br/>Look! There! I see something shimmering in the sunlight!</p>



<p>The group rushes to the spot where Sarah saw the glitter and finds a small golden statue buried in the earth.</p>



<p>MARIA<br/>This must be it! We have found the lost city of gold!</p>



<p>The group starts digging, discovering more and more treasures.</p>



<p>JAKE<br/>We are rich! We have found a treasure beyond our wildest dreams!</p>



<p>But as they continue to dig, the ground beneath their feet begins to rumble and shake. Suddenly, a huge stone door opens, revealing a hidden entrance to the lost city.</p>



<p>MARIA<br/>Quick, everyone inside! We must find the source of the treasure before it is too late!</p>



<p>The group enters the tunnel, lit by flashlights. They reach a large open chamber filled with treasures.</p>



<p>SARAH<br/>That&#8217;s it! The lost city of gold!</p>



<p>But as they explore the chamber, they hear a loud rumble. Suddenly, the ground beneath their feet gives way and they fall into a deep pit.</p>



<p>MARIA<br/>Help! We are in a trap!</p>



<p>But just when they think all is lost, they hear a faint voice calling to them.</p>



<p>GUARD (off-camera)<br/>Don&#8217;t worry, adventurers. I am the guardian of the lost city and I will help you escape.</p>



<p>With the help of the watchman, the group manages to get out of the pit and back to the surface. They re-emerge from the jungle, triumphant and rich beyond expectation.</p>



<p>JAKE<br/>We made it! We have found the lost city of gold!</p>

</div></details></div>
<p>Apart from the <em>INT. JUNGLE</em> slug line (an &#8220;interior jungle&#8221; is a curious choice), it is otherwise a good starting point. But some formal errors jumped out at <em>Andrea Rocci</em>, a &#8220;human&#8221; screenwriter, albeit without an in-depth analysis (we were chatting over a beer).</p>

<p>In fact, a screenplay speaks in images, and phrases such as &#8220;discovering more and more treasures&#8221; or &#8220;rich beyond expectation&#8221; are not visual at all. Which treasures? Statues, coins, something else? And what does &#8220;rich&#8221; mean? Are they draped in gold? Dressed flamboyantly? Everyone with the latest iPhone and the keys to a Ferrari?</p>

<p>Not to mention the lack of descriptions of the environments. The jungle itself is left entirely to the imagination of the director or the set designers (or the 3D artists, as the case may be).</p>

<p>However, we must make a virtue of necessity: we will leave every such choice to the discretion of the director (if only one existed&#8230;). Perhaps we will find a good one on <a href="https://www.character.ai" target="_blank" rel="noreferrer noopener"><strong>character.ai</strong></a>. Do try it out and talk to artificial &#8220;characters&#8221;; even <em>Albert Einstein</em> is there!</p>

<h4 class="wp-block-heading">Prices</h4>

<p>Here it&#8217;s simple: basically, <strong>it costs nothing</strong>. A $20/month version is starting to be marketed in some countries, which removes some limitations (mainly due to the computing power needed to handle the millions of requests reaching OpenAI every day).</p>

<h2 class="wp-block-heading">Alternatives to ChatGPT</h2>

<p>At present GPT-3 is hard to beat&#8230; While waiting for the most celebrated candidates to arrive (<em><a href="https://blog.google/technology/ai/bard-google-ai-search-updates/" target="_blank" rel="noreferrer noopener">Google Bard</a></em> soon), I asked ChatGPT itself about its competitors. It pointed me to <em>ScriptBuddy</em>, <em>WriterDuet</em> and <em>AI Screenwriter</em> to begin with; asked for more, it added <em>Plotbot</em>, <em>Amazon Storywriter</em>, and <em>InkTip Script Listing</em>. Okay, I thought that was enough&#8230; except that the answer is actually partly false. Here we see the current limitations of this artificial intelligence, due in part (but not only) to the fact that its training data stops at 2021.</p>

<p>Meanwhile, <em>ScriptBuddy</em>, originally based on GPT-2, has not been maintained for more than 3 years, so we exclude it. <em>WriterDuet</em>, on the other hand, is most useful for organizing a script, but it doesn&#8217;t write anything itself&#8230; As for <em>AI Screenwriter</em>, I don&#8217;t think it even exists; at least dear old Google gives me no results.</p>

<p>As for Plotbot, or rather <em><a href="https://github.com/rasbot/plot-bot" target="_blank" rel="noreferrer noopener">Plot-bot</a></em>, it actually generates movie plots by exploiting the GPT-2 engine. But, like ScriptBuddy, it has not been maintained for years, and to optimize time, I rule it out. The same is true for <em><a href="https://en.wikipedia.org/wiki/Amazon_Storywriter" target="_blank" rel="noreferrer noopener">Amazon StoryWriter</a></em>, while InkTip Script Listing is not for generating text.</p>

<h3 class="wp-block-heading">Dramatron</h3>

<p><a href="https://deepmind.github.io/dramatron/" target="_blank" rel="noreferrer noopener"><strong>Dramatron</strong></a> is a new alternative, from <em>Deepmind</em>, that promises exactly what we are looking for: writing screenplays for film and theater. Unfortunately, it was only accessible for a couple of days around December 9, and by the time I got around to trying it, it was too late.</p>

<p>It remains an interesting project to keep in mind for when it reopens. For the time being, <em><a href="https://www.youtube.com/@georgemike-com" target="_blank" rel="noreferrer noopener">George Mike</a></em>, the author of this YouTube video, does not like it.</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="DeepMind&#039;s Dramatron is very not good. [Demo]" width="678" height="509" src="https://www.youtube.com/embed/A1qGOfbGQcA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Dramatron test</figcaption></figure>

<p>His comment? &#8220;<em>&#8230;overall, it seemed insipid to me. And it didn&#8217;t give me much more than the dialogue you find in every movie. It clearly grasped the genre, but did not explore it in an interesting way. It did not advance the narrative</em>&#8221;.</p>

<h3 class="wp-block-heading"><strong>Rytr</strong></h3>

<p>Through <em>Google</em> and <em>Futurepedia</em> I then came to <a href="https://rytr.me" target="_blank" rel="noreferrer noopener"><em>Rytr</em></a>, a nice tool. Create an account, log in, and you land in the work area. It basically asks for the language, style, and use case of the text to be created, and off it goes. Since there was clearly no &#8220;film script&#8221; use case, I first tried a tool called the &#8220;magic wand&#8221;.</p>

<p>It was the same request I had made to ChatGPT; this time, however, the response was almost hilarious:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>Story for a short film</span></summary><div class="lightweight-accordion-body">

<p>In the name of Allah, the Compassionate, the Merciful</p>



<p>I believe</p>



<p>I believe in Allah, the One God</p>



<p>I believe in prophets and messengers</p>



<p>I believe in angels and scripture</p>



<p>I believe in the books of revelation</p>



<p>I believe in life after death and that on judgment day justice will be done.</p>



<ul class="wp-block-list">
<li>Sorry, what? You are saying &#8220;I believe&#8221; everything.</li>
</ul>



<p>I don&#8217;t have time for that.</p>



<ul class="wp-block-list">
<li>I&#8217;m sorry, I don&#8217;t follow you.</li>



<li>Why are you apologizing?</li>



<li>What?</li>
</ul>

</div></details></div>
<p>Let&#8217;s try something different and more specific&#8230; I asked: &#8220;I need a script of about 2000 words, with dialogue, to make a CG short film. The title is: the lost city&#8221;, and set &#8220;story plot&#8221; as the use case, with the prompt based on the previous ChatGPT output. We are still a long way off, although we do get a basic storyline:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The lost city</span></summary><div class="lightweight-accordion-body">

<p>In the near future, a crew of archaeologists and scientists on an exploratory mission to an unknown planet are captured and imprisoned by an ancient alien civilization.</p>



<p>The team is somewhat lost because of the language barrier and interactions with the aliens. They are kept in captivity for decades, exploring the ruins and discovering all the secrets of civilization.</p>



<p>They manage to return home, at the end. Scientists have learned many new things and are bringing them back to Earth to share with humans. However, they took so long that by now not only has the Earth changed, but there is no more time for these discoveries.</p>

</div></details></div>
<p>Same request, but with the magic command as the use case:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The lost city</span></summary><div class="lightweight-accordion-body">

<p>INT. LIVING ROOM &#8211; AFTERNOON</p>



<p>The room is well furnished and looks very comfortable. There is a door that leads to another room.</p>



<p>JOE: Good morning, John.</p>



<p>Morning, Joe. How are you this morning?</p>



<p>JOE: I&#8217;m fine, thank you for asking. Do you?</p>



<p>JOHN: I&#8217;m just fine! What&#8217;s on the agenda for today?</p>



<p>JOE: I&#8217;m going to catch up on my reading and then probably take a walk around the neighborhood. Do you have any programs</p>

</div></details></div>
<p>Okay, a useful tool for short texts but still far from total autonomy. Let&#8217;s come to prices, although in our case I&#8217;m keeping ChatGPT&#8217;s text.</p>

<h4 class="wp-block-heading">Prices</h4>

<p>Rytr has a completely <strong>free</strong> version, limited to 10,000 characters per month. Otherwise it costs <strong>$9 per month</strong> for 100,000 characters, or <strong>$29 per month</strong> for unlimited use.</p>

<h3 class="wp-block-heading">Shortly AI</h3>

<p>Another tool is <strong><a href="https://www.shortlyai.com" target="_blank" rel="noreferrer noopener">Shortly AI</a></strong>, designed primarily to combat &#8220;writer&#8217;s block&#8221;: write your text or a sentence, and it continues.</p>

<p>Since I could not ask it to write the story for me, I fed it the first sentence of the story generated by ChatGPT: <em>As the sun rose on the horizon, a small group of adventurers set out to find the lost city of gold.</em> Although based on the same language model, the continuation was clearly different.</p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="682" src="https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-1024x682.jpg" alt="" class="wp-image-789" srcset="https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-1024x682.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-300x200.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-768x512.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-1536x1023.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-2048x1364.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>This is the final text:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The lost city</span></summary><div class="lightweight-accordion-body">

<p>As the sun rose on the horizon, a small group of adventurers set out to find the lost city of gold. It was said that there were more than a thousand tons of gold in the lost city, but like all legends, the story of the treasure is told only to those who are worthy.</p>



<p>They had been on the road for two weeks and hoped to reach their destination by the end of the week. So far the road had been smooth, but when they approached a hill, there was a small construction on the road, which made the wagons slow down. The caravan had two large wagons carrying gold, while the rest were on horseback. The rest of the group was not much concerned about the delay, apart from the leader. To contain large amounts of gold, silver, and jewelry dating back to the ancient Aztecs. A lost city that was protected by untold numbers of demons and ghosts. It was said that not one man or beast who entered the cursed city returned.</p>



<p>The small convoy headed to the place indicated by the employer. They went over the details of the contract one last time. The six adventurers had divided into three groups of two. Each group had positioned itself in a location. The city was hidden in the depths of a mountain range at the center of which was a well-protected fortress left by the Aztec empire. The well was said to be a gateway to the lost city. Inside the fortress was to be a statue of an angel. It was a special stone tablet with an incantation that would concentrate the power of the jewel on the eighth son of an Aztec king.</p>



<p>Upon hearing of the strange fortress, the Aztec leader marched his army to attack it, but mysteriously one of the generals and his army were taken away to another kingdom and disappeared. The king then ordered the retreat. This was the beginning of problems. As they were leaving, they were attacked by large monsters that were thought to have been killed, but then were seen on top of the fortress. Battles continued throughout the following week, with the Aztecs defeated.</p>



<p>At the end, the king ordered the remaining troops to return to the city, saying that the previous week&#8217;s battles had awakened some ancient demons responsible for the disappearance of not only his son but also some of his generals and the remaining troops. The king had outlawed all kinds of magic and spells in his city, and everyone was to be executed. One young man was helped by his mother, also a sorceress, once she was discovered.</p>

</div></details></div>
<p>Ultimately, a decent story, but with little coherence. The AI took bits of text from here and there without connecting them in an &#8220;intelligent&#8221; way.</p>

<h4 class="wp-block-heading">Prices</h4>

<p>Shortly AI is certainly useful for adding a sentence or two to overcome writer&#8217;s block, but left to write on its own it does not seem the best tool.</p>

<p>Nor is its cost negligible: <strong>$79 per month</strong>, or <strong>$780 per year</strong>. I only used the free trial, which just requires registering an account.</p>

<h3 class="wp-block-heading">WriterX</h3>

<p>Let&#8217;s look at <a href="https://writerx.co" target="_blank" rel="noreferrer noopener"><strong>WriterX</strong></a>. It has templates for creating articles, social posts, text conclusions and much more. I expect results similar to Shortly, since it is not designed for whole stories.</p>

<p>Unfortunately, however, the free trial does not give access to all features, and I didn&#8217;t feel like paying for a service (a fairly expensive one, at that) without trying it first, especially since I already have a free alternative I am partly satisfied with.</p>

<p>Although I am evaluating it for its useful marketing features&#8230; such as the bios of my social channels, or future help in writing blog articles. I also used it to write the meta description of this very article.</p>

<h4 class="wp-block-heading">Prices</h4>

<p>WriterX costs <strong>$29 a month</strong> in the standard version (basically the trial I had), or <strong>$59 a month</strong> for unlimited text and features. It is available in 25 languages.</p>

<h3 class="wp-block-heading">Jasper AI</h3>

<p>I also wanted to try <strong><a href="https://www.jasper.ai" target="_blank" rel="noreferrer noopener">Jasper AI</a></strong>, which is ultimately a graphical user interface for GPT-3 itself. It is perhaps the most heavily advertised, found everywhere. But I never even began to use it: it forces you to enter your credit card even for the free version, and to verify it charges not a few cents but the entire <strong>first month: $29</strong>. An unprofessional attitude, so I decided not to deal with them and asked for my money back immediately.</p>
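<p>Since several of these tools are front-ends over the same GPT-3 model, anyone comfortable with a little code could in principle query the model directly through OpenAI&#8217;s API instead. A minimal sketch of how such a request might be assembled; the model name, parameters, and helper function are illustrative, not the exact internals of any of these products:</p>

```python
# Toy sketch: building the kind of payload a GPT-3-style completion
# endpoint expects. Names and parameter values here are illustrative.

def build_completion_request(story_so_far: str, instruction: str,
                             max_tokens: int = 200) -> dict:
    """Assemble a request payload for a GPT-3-style completion endpoint."""
    prompt = f"{instruction}\n\n{story_so_far}\n\nContinuation:"
    return {
        "model": "text-davinci-003",   # illustrative model name
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.8,            # higher values = more creative output
    }

request = build_completion_request(
    story_so_far="The explorers reached the hidden temple at dusk.",
    instruction="Continue this adventure story in the same tone.",
)
print(request["prompt"].splitlines()[0])
```

<p>The actual network call (and billing) would then go through OpenAI&#8217;s own client library; the point is only that the &#8220;magic&#8221; of these tools is largely a prompt plus a handful of parameters like these.</p>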

<h3 class="wp-block-heading">GPT-J and Writey AI</h3>

<p>To get out of the GPT-3 universe, I wanted to try the open-source <strong>GPT-J</strong> via the <a href="https://6b.eleuther.ai" target="_blank" rel="noreferrer noopener">6b.eleuther.ai</a> website; however, it always crashed with the message &#8220;Unable to connect to the model. Please try again&#8221;. I also tried <strong><a href="https://writey.ai" target="_blank" rel="noreferrer noopener">Writey AI</a></strong>, which works well but is too specialized in writing blog articles. Still, I recommend checking it out, if only for the 5 free articles a month.</p>

<p>Ultimately, I am tired, and any further research seems futile. After all, ChatGPT&#8217;s text is acceptable (for an &#8220;automatic&#8221; text, at least), so I would say let&#8217;s move on to the technical realization of the short film.</p>

<h2 class="wp-block-heading">Creating 3D characters</h2>

<h3 class="wp-block-heading">Can an artificial intelligence generate 3D characters?</h3>

<p>So we need the characters for our story, but is it really possible to generate them with A.I.? <strong>Spoiler: today, not well</strong>. There are many promises, and some solutions come close, but it is not yet truly possible. Let&#8217;s look at them briefly, as they will be useful in the near future; then we will figure out how to get the characters for our story today, with little money.</p>

<h3 class="wp-block-heading">PIFuHD</h3>

<p>To start, there is <a href="https://colab.research.google.com/drive/11z58bl3meSzo6kFqkahMa35G5jmh2Wgt#scrollTo=1TfPAtL4CyZw" target="_blank" rel="noreferrer noopener">PIFuHD</a>, which is already available to the public and promises to create a 3D character from a single photo. It works, but even the presentation videos show a quality far from acceptable for cinema.</p>

<h3 class="wp-block-heading">Google DreamFusion</h3>

<p><a href="https://dreamfusion3d.github.io" target="_blank" rel="noreferrer noopener"><strong>DreamFusion</strong></a> is one of the first A.I.s that can leverage a 2D text-to-image diffusion model (the same kind used to create photos with artificial intelligence, now widespread) to create 3D models.</p>

<p>Same problem as before: unsatisfactory quality even at first glance. </p>

<h3 class="wp-block-heading">Microsoft Rodin Diffusion</h3>

<p>A 2023 newcomer from Microsoft, <a href="https://3d-avatar-diffusion.microsoft.com" target="_blank" rel="noreferrer noopener">Rodin Diffusion</a> is not yet available to the public. It promises to create a realistic 3D avatar from a single photo. Well, judging from the photos released on their site, the hair&#8230; That is always the problem!</p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="275" src="https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-1024x275.jpg" alt="" class="wp-image-806" srcset="https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-1024x275.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-300x80.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-768x206.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-1536x412.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-2048x549.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Microsoft 3D Avatar Diffusion generative model</figcaption></figure>

<h3 class="wp-block-heading">Nvidia Get3D</h3>

<p>You certainly cannot miss the queen of graphics cards, Nvidia, among the 3D tools. And indeed its <a href="https://nv-tlabs.github.io/GET3D/" target="_blank" rel="noreferrer noopener"><strong>Get3D</strong></a> is superlative in creating 3D models from information learned from a dataset of 3D models.</p>

<p>Its dataset is much more limited, which restricts the variety of 3D objects that can be produced. It is, however, open source, a definite plus, although the entire training must be done on one&#8217;s own system. That means plenty of high-end video cards&#8230; And they cost a little. Or rather, a lot.</p>

<h3 class="wp-block-heading">Nvidia Magic3D</h3>

<p>How did Nvidia solve the problem of the limited range of objects Get3D can create? As this article in <a href="https://the-decoder.com/nvidias-magic3d-turns-text-into-high-resolution-3d-objects/" target="_blank" rel="noreferrer noopener">The Decoder</a> explains, simply by copying Google&#8230; and trying to make its new <strong><a href="https://deepimagination.cc/Magic3D/" target="_blank" rel="noreferrer noopener">Magic3D</a></strong> (<a href="https://arxiv.org/pdf/2211.10440.pdf" target="_blank" rel="noreferrer noopener">researchers&#8217; paper here</a>) faster and sharper than DreamFusion. This video explains well how it works:</p>

<figure class="wp-block-video"><video controls="" src="https://www.darioriccio.com/wp-content/uploads/2022/12/nvidia_magic3d.mp4"></video><figcaption class="wp-element-caption">Process of 3D object creation with Nvidia Magic3D from text</figcaption></figure>

<p>In practice, Get3D draws on a dataset of other 3D models. Magic3D, instead, starts from images, themselves generated by A.I., paving the way for virtually infinite combinations.</p>

<p>Does it work well? What is certain is that it is not available to the public, but from the videos and examples on the Web it looks like a very promising technology, although it does not yet reach the necessary quality.</p>

<h3 class="wp-block-heading">StyleGAN-NADA</h3>

<p>For making Pok&#233;mon or other fun things there is <strong><a href="https://stylegan-nada.github.io" target="_blank" rel="noreferrer noopener">StyleGAN-NADA</a></strong>, trained following OpenAI&#8217;s CLIP (Contrastive Language-Image Pre-Training) model. It lets you shift an image generator toward a new domain from just a textual description, without seeing any reference images and without collecting additional training data.</p>

<p>It is also possible to modify existing images to make them similar to those in other domains, for example, using an image of a dog to generate a cat. The same approach can be applied to other generative architectures, opening up interesting possibilities for creating images quickly and accurately.</p>
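<p>Under the hood, CLIP-guided methods like this steer a generator by maximizing the cosine similarity between the CLIP embedding of the generated image and that of the text prompt. Computing real embeddings requires the CLIP network itself, but the score being optimized is just the quantity below; the 4-dimensional vectors are invented for illustration (real CLIP embeddings have hundreds of dimensions):</p>

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two embedding vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy "embeddings" of a text prompt and a generated image
text_embedding = [0.2, 0.9, 0.1, 0.4]
image_embedding = [0.25, 0.85, 0.15, 0.35]
print(round(cosine_similarity(text_embedding, image_embedding), 3))
```

<p>Training nudges the generator&#8217;s weights so this score grows, which is how a text prompt alone can pull the output toward a new visual domain.</p>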

<h3 class="wp-block-heading">Pollinations</h3>

<p>Among the &#8220;next steps,&#8221; <strong><a href="https://pollinations.ai" target="_blank" rel="noreferrer noopener">Pollinations</a></strong> promises to do what we need it to do. From their website, &#8220;at the research level, our team is developing technology that allows people to generate 3D objects and avatars with the help of text alone&#8221;.</p>

<p>Here again, we have to wait. For now, it allows interesting things in the photo/video area; try it if you like, but there is little of use for the purposes of this article.</p>

<h3 class="wp-block-heading">Text2mesh</h3>

<p>Small but interesting, <strong><a href="https://threedle.github.io/text2mesh/" target="_blank" rel="noreferrer noopener">Text2mesh</a></strong> is less of an exercise in style than its predecessors. Here you must already have the model, but the AI promises to modify it on its own: increasing the polygon count, changing its shape and color, even creating the texture from scratch. All based on a text prompt, a written request.</p>

<p>A note for the diary, in case we need it.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="111" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/02/lumaai-111x300.jpg" alt="" class="wp-image-828" srcset="https://www.darioriccio.com/wp-content/uploads/2023/02/lumaai-111x300.jpg 111w, https://www.darioriccio.com/wp-content/uploads/2023/02/lumaai.jpg 376w" sizes="(max-width: 111px) 100vw, 111px" /><figcaption class="wp-element-caption">Luma AI</figcaption></figure></div>
<h3 class="wp-block-heading">Luma AI</h3>

<p><a href="https://lumalabs.ai" target="_blank" rel="noreferrer noopener"><strong>Luma AI</strong></a> is an interesting project that scans real objects and recreates them in 3D. The approach is interesting, and the quality of the scans is reasonably good. For props or background figures, I consider it more than acceptable, even in production.</p>

<p>It also recently added the ability to create objects, and thus characters, in 3D from a text prompt, with the classic &#8220;imagine&#8221; command already made famous by the <em><a href="https://midjourney.com/home/?callbackUrl=%2Fapp%2F" target="_blank" rel="noreferrer noopener">MidJourney</a></em> image generator. But here, again, the quality is not sublime: good experiments, but definitely not usable for production purposes.</p>

<h2 class="wp-block-heading">3D characters and objects without artificial intelligence</h2>

<p>From all of this we have understood one thing: A.I., as of today, January 2023, still does not give us good 3D models. Since we have to keep the budget low, however, let&#8217;s create them with the tools already available.</p>

<h3 class="wp-block-heading">3D characters in our short film</h3>

<p>I won&#8217;t go into a lot of research here, but I evaluate two 3D character creation tools that I already know: the simple <a href="https://www.reallusion.com/character-creator/" target="_blank" rel="noreferrer noopener"><em>Reallusion</em> <strong>Character Creator</strong></a>, and <a href="https://www.unrealengine.com/en-US/metahuman" target="_blank" rel="noreferrer noopener"><em>Epic&#8217;s</em> fantastic <strong>MetaHuman</strong></a>.</p>

<p>We will need three characters for our story: <em>Maria</em>, <em>Jake</em>, and <em>Sarah</em>.</p>

<h3 class="wp-block-heading">MetaHuman Creator</h3>

<p>Actually, in the case of MetaHuman something has changed since my last use: I tried the beta of <strong>MetaHuman Creator</strong>. It is phenomenal in that it let me take advantage of the computing power of Epic&#8217;s servers, creating characters during a work break directly on my laptop (which only needs to receive a video stream).</p>

<p>After requesting &#8220;Early Access&#8221; with an Epic Games account at <a href="https://metahuman.unrealengine.com">metahuman.unrealengine.com</a>, we are presented with a choice of preset characters.</p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="570" src="https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-1024x570.jpg" alt="" class="wp-image-800" srcset="https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-1024x570.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-300x167.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-768x427.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-1536x855.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-2048x1139.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>We imagine that all three are between the ages of 20 and 40; beyond that, the script does not tell us much about them.</p>

<h4 class="wp-block-heading">Jake</h4>

<p>I choose to begin by selecting <em>Aoi</em> as <strong>Jake</strong>. I don&#8217;t know, that beard gives me the idea of an &#8220;adventurer&#8221;.</p>

<p>The software warns that some elements of the character (specifically the hair) are still under development, and therefore only LODs (levels of detail) 0 and 1, the most detailed ones, will be displayed. For us that is fine: the destination is a pre-rendered video at the highest quality, not a real-time video game.</p>
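<p>For readers unfamiliar with LODs: a real-time engine keeps several versions of each mesh at decreasing polygon counts and swaps them in as the model shrinks on screen, with LOD 0 being the most detailed. A toy sketch of the selection logic, with invented distance thresholds:</p>

```python
def select_lod(distance_m: float, thresholds_m: list) -> int:
    """Return the LOD index to render at a given camera distance.

    thresholds_m[i] is the farthest distance (in meters) at which
    LOD i is still used; beyond the last threshold, the coarsest
    LOD is shown.
    """
    for lod, limit in enumerate(thresholds_m):
        if distance_m <= limit:
            return lod
    return len(thresholds_m)  # farthest away: coarsest mesh

# Invented thresholds: LOD 0 up to 5 m, LOD 1 up to 15 m, LOD 2 up to 40 m
thresholds = [5.0, 15.0, 40.0]
print(select_lod(2.0, thresholds))   # close-up: most detailed mesh
print(select_lod(60.0, thresholds))  # far away: coarsest mesh
```

<p>A pre-rendered film sidesteps all of this: every frame can afford the LOD 0 mesh, which is why the Creator&#8217;s limitation doesn&#8217;t bother us here.</p>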

<p>A few changes to the character (shirtless, eye color, more &#8220;suitable&#8221; shoes and pants), and he is saved. We will later export him with the Quixel Bridge plugin of Unreal Engine 5.</p>

<h4 class="wp-block-heading">Maria</h4>

<p>Let&#8217;s move on to <strong>Maria</strong>. I asked ChatGPT to come up with her characteristics, and the answer was that she could be a woman around 30 years old. Brown hair, shoulder-length and pulled back into a ponytail; brown eyes, intense and deep; Latin ethnicity, with skin tanned from her outdoor adventures.</p>

<p>Let&#8217;s try to realize her, starting from <em>Roux</em>. We blend her with Lena, Kendra and Tori, who seem suitable for adjusting her features and thus her ethnicity a bit; give her a ponytail and brown eyes; modify her skin texture to add a few years; remove the make-up she would hardly wear in the middle of the jungle; change her clothing, and that&#8217;s it.</p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-1024x576.jpg" alt="" class="wp-image-808" srcset="https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-1024x576.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-300x169.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-768x432.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-1536x864.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-2048x1152.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Maria</figcaption></figure>

<h4 class="wp-block-heading">Sarah</h4>

<p>Finally, <strong>Sarah</strong>. For ChatGPT she is about 25 years old, with short, wavy blond hair; blue eyes, lively and inquisitive; around 1.70 meters tall, slender and muscular, indicating an active and sporty person; of Northern European descent, with fair and delicate skin.</p>

<p>We start from <em>Vivian</em> and make various modifications until she looks something like what is required. Clearly MetaHuman has many limitations, even more so in this online version. For example, we have practically no control over the body, so &#8220;muscular&#8221; is a feature we will have to give up, unless we model it later. But first, I am not a 3D modeler; and this is a zero-budget project for educational purposes only. For the same reason, there is no point wasting more time on it than necessary.</p>

<p>Finally, let&#8217;s leave them there; we&#8217;ll create the animations with mannequins and then retarget them to the MetaHuman characters directly in Unreal Engine 5.</p>
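<p>Retargeting, in essence, means copying each bone&#8217;s animation from the mannequin&#8217;s skeleton onto the correspondingly named bone of the MetaHuman skeleton. Unreal Engine 5 handles this for us, but the core name-remapping step can be sketched like this; the bone names and rotation values below are invented for illustration:</p>

```python
def retarget_pose(source_pose: dict, bone_map: dict) -> dict:
    """Copy each source bone's transform onto its mapped target bone.

    Bones without a mapping are simply skipped, which mirrors what
    happens in practice when two skeletons don't match exactly.
    """
    return {
        bone_map[bone]: transform
        for bone, transform in source_pose.items()
        if bone in bone_map
    }

# Invented example: one frame of a mannequin pose (bone -> Euler rotation)
mannequin_pose = {"upperarm_l": (0.0, 45.0, 10.0), "hand_l": (5.0, 0.0, 0.0)}
bone_map = {"upperarm_l": "LeftArm", "hand_l": "LeftHand"}
print(retarget_pose(mannequin_pose, bone_map))
```

<p>The real engine additionally compensates for differing bone lengths and rest poses, but the mapping above is the conceptual heart of it, and it is why animations authored once on a mannequin can drive any of our three characters.</p>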

<h2 class="wp-block-heading">Conclusions</h2>

<p>Let&#8217;s stop here for today; in one of the next articles I will talk specifically about animation and Motion Capture with artificial intelligence (where it will be most useful to us), and then we will continue with environments, voices, music, and whatever else we need to finalize our little project.</p>

<p>I will give myself time to finish slowly, and possibly write other articles before continuing this one. For two reasons: these are elaborate operations, and artificial intelligence is in an explosive phase. An article written today may be old tomorrow. Maybe by tonight.</p>

<p>Therefore, since we will need this information much more toward the end of this year, for the actual production of a fulldome story, let us keep an eye on all the news in the coming months.</p>

<p>As always, thank you for following me, and a hug.</p>
<p>The article <a href="https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/">How to make low-budget films with Artificial Intelligence &#8211; Early stages</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>What is experiential cinema</title>
		<link>https://www.darioriccio.com/en/what-is-experiential-cinema/</link>
					<comments>https://www.darioriccio.com/en/what-is-experiential-cinema/#comments</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Sat, 10 Sep 2022 11:03:00 +0000</pubDate>
				<category><![CDATA[Cinema]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[cinema]]></category>
		<category><![CDATA[experiential cinema]]></category>
		<category><![CDATA[experiential media]]></category>
		<category><![CDATA[innovation]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/what-is-experiential-cinema/</guid>

					<description><![CDATA[<p>Immersive, interactive, non-linear, multi-sensory and algorithmic. Let's see how the cinema of the future will be experiential and engaging.</p>
<p>The article <a href="https://www.darioriccio.com/en/what-is-experiential-cinema/">What is experiential cinema</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>As promised in the last article, we&#8217;ll continue analyzing the cinema of the future in its immersive form, an immersivity closely tied to the experience it can create in the viewer. In this article we will therefore talk about <strong>experiential cinema</strong> through <em>old and new storytelling</em>, <em>first person</em>, <em>greater involvement of the viewer and the five senses</em>, and more <em>data</em> and <em>artificial intelligence</em>.</p>



<h2 class="wp-block-heading">What is a medium, or &#8220;mediated story&#8221;</h2>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="200" src="https://www.darioriccio.com/wp-content/uploads/2022/09/cinema-300x200.jpg" alt="Interiors of a cinema" class="wp-image-533" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/cinema-300x200.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/cinema.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p class="wp-embed-aspect-4-3 wp-has-aspect-ratio">Cinema belongs to the &#8220;media&#8221; category, also definable as &#8220;mediated stories&#8221;, or &#8220;telling a story through something&#8221;. The medium can be a <strong>microphone and a speaker</strong> (radio), <strong>a film camera and a projector</strong> (cinema), <strong>a camera and a television</strong> (TV), <strong>a word processor and a printing press</strong> (newspapers). Any way of telling a story that is not &#8220;direct&#8221;, and therefore different from ordinary conversation, is defined as a <em>medium</em>.</p>



<p class="wp-embed-aspect-4-3 wp-has-aspect-ratio">Among these are the new <strong>experiential media</strong>, which offer users potentially positive and beneficial results, increasing their involvement and above all their empathy. The lived experience makes them feel active and involved in the story, seeing, hearing, touching or experiencing situations they have never experienced before.</p>



<h2 class="wp-block-heading">The characteristics of traditional cinema</h2>



<p>Each medium has advantages and disadvantages. Or, we could say, possibilities and impossibilities. Going into detail, let&#8217;s quickly look at the five peculiar characteristics of the medium at the center of our attention: <em>cinema</em>.</p>



<ul class="wp-block-list">
<li><strong><a href="#static_and_linear_narrative_structure">Static and linear narrative structure</a></strong></li>



<li><strong><a href="#single_or_dual_mode">Single or dual mode</a></strong></li>



<li><strong><a href="#episodic">Episodic</a></strong></li>



<li><strong><a href="#third_person_perspective">Third person perspective</a></strong></li>



<li><strong><a href="#passive_audience">Passive audience</a></strong></li>
</ul>



<h3 class="wp-block-heading">Static and linear narrative structure</h3>



<p>It&#8217;s at the basis of cinema: a one-way narrative structure with <strong>beginning</strong>, <strong>development</strong>, <strong>climax</strong> and <strong>end</strong>. Fixed and static, it obviously cannot be changed by the viewer. Sure, there are <em>Flashbacks</em> or <em>Flashforwards</em>, but they don&#8217;t fundamentally change this approach to storytelling.</p>



<p>There is usually a causal chain: each passage narrated in the script leads to a subsequent passage which is a consequence of the previous one.</p>



<p>And <strong>the movies</strong>, of course, <strong>don&#8217;t change over time</strong>. Except for very rare cases of errors or problems arising after release (for example, <a href="https://it.wikipedia.org/wiki/Stanley_Kubrick" target="_blank" rel="noreferrer noopener"><strong>Kubrick</strong></a> cut the last minutes of <a href="https://it.wikipedia.org/wiki/Shining_(film)" target="_blank" rel="noreferrer noopener"><em>The Shining</em></a> after its theatrical release), films remain the same from the moment of publication, and forever.</p>



<p>A good analysis was made in 1992 by <a href="https://www.scu.edu/cas/religious-studies/faculty--staff/david-pinault/" target="_blank" rel="noreferrer noopener">Prof. David Pinault</a> in <a href="https://books.google.ch/books/about/Story_Telling_Techniques_in_the_Arabian.html?id=guHmLGJMbg4C&amp;redir_esc=y" target="_blank" rel="noreferrer noopener"><strong>Story-Telling Techniques in the Arabian Nights</strong></a>.</p>



<h4 class="wp-block-heading">Non-linear structure in traditional cinema</h4>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41-300x300.webp" alt="" class="wp-image-540" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41-300x300.webp 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41-1024x1024.webp 1024w, https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41-150x150.webp 150w, https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41-768x768.webp 768w, https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41-1536x1536.webp 1536w, https://www.darioriccio.com/wp-content/uploads/2022/09/1b8ebd33-ad7f-4656-b57d-d9a7adaa5426_progress_image_41.webp 1664w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption class="wp-element-caption">Image made with <a href="https://www.midjourney.com/home/" target="_blank" rel="noreferrer noopener">Midjourney AI</a></figcaption></figure></div>


<p>In postmodern cinema there is actually a concrete attempt to modify this linear structure. Attempts are often made to &#8220;weave&#8221; the story, leaving the viewer to use intuition and irrational instinct to understand the film, while logic and drama are dissolved and diluted.</p>



<p>The linear structure often develops into a symphony composed of several parts, each with a non-linear structure. In the following sections we will better understand how this will develop even further in experiential/interactive cinema, forming various personalized narratives (reminiscent of video games).</p>



<h3 class="wp-block-heading">Single or dual mode</h3>


<div class="wp-block-image">
<figure class="alignright size-medium is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/09/occhio-blu-300x180.jpg" alt="Blue eye" class="wp-image-538" width="300" height="180" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/occhio-blu-300x180.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/occhio-blu.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p>By this we mean that <strong>cinema involves at most one or two senses</strong>, <em>sight</em> and <em>hearing</em>. Clearly, current cinema is always dual; in the &#8220;single mode&#8221; category we can only place silent films.</p>



<p>Over the years, various attempts have been made to involve the other senses. Think of the great directors who manage to convey the sense of <em>taste</em> to the viewer, for example in films such as <a href="https://en.wikipedia.org/wiki/Eat_Drink_Man_Woman" target="_blank" rel="noreferrer noopener"><em>Eat Drink Man Woman</em></a>, <em><a href="https://en.wikipedia.org/wiki/Ratatouille_(film)" target="_blank" rel="noreferrer noopener">Ratatouille</a></em> or <em><a href="https://en.wikipedia.org/wiki/Mid-August_Lunch" target="_blank" rel="noreferrer noopener">Mid-August Lunch</a></em>. Clearly, it is a trick played on our brains. But ultimately, cinema itself is one.</p>



<h3 class="wp-block-heading">Episodic</h3>



<p>According to <a href="https://en.wikipedia.org/wiki/Jason_Mittell" target="_blank" rel="noreferrer noopener">Prof. Jason Mittell</a> in his book <a href="https://www.amazon.com/Complex-TV-Contemporary-Television-Storytelling/dp/0814769608/ref=sr_1_3?crid=BGWHRAR579HH&amp;keywords=jason+mittell&amp;qid=1663675770&amp;sprefix=jason+mittel%2Caps%2C227&amp;sr=8-3" target="_blank" rel="noreferrer noopener"><strong>Complex TV: The Poetics of Contemporary Television Storytelling</strong></a>, <strong>current cinema and media</strong> are generally episodic; that is, they <strong>tend</strong> to <strong>develop around an event</strong> or a series of interrelated events. This is even more true, of course, in the world of journalistic media. </p>



<p>A story is told based on facts and events, real or fictional. It is told from the narrator&#8217;s point of view, which makes it easier for the viewer to follow but at the same time empathically distances him from the story.</p>



<h3 class="wp-block-heading">Third person perspective</h3>


<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="300" height="185" src="https://www.darioriccio.com/wp-content/uploads/2022/09/sfera-di-cristallo-300x185.jpg" alt="Crystal ball" class="wp-image-542" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/sfera-di-cristallo-300x185.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/sfera-di-cristallo.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p>This perspective is an important feature in our analysis. In fact, although possible, <strong>first- and second-person perspectives have always been little used in traditional cinema</strong>. In the history of communication in general, they were abundantly used only in the radio of the first half of the 20th century.</p>



<h4 class="wp-block-heading">Heart of Darkness, first-person radio adaptation</h4>



<p>As an example, consider <a href="https://en.wikipedia.org/wiki/Joseph_Conrad" target="_blank" rel="noreferrer noopener">Joseph Conrad</a>&#8216;s <a href="https://en.wikipedia.org/wiki/Heart_of_Darkness" target="_blank" rel="noreferrer noopener"><em>Heart of Darkness</em></a>. It was adapted for radio in 1938 by director <a href="https://it.wikipedia.org/wiki/Orson_Welles" target="_blank" rel="noreferrer noopener">Orson Welles</a> (famous for having made Americans believe they were under Martian attack with the radio show <em><a href="https://en.wikipedia.org/wiki/The_War_of_the_Worlds_(1938_radio_drama)" target="_blank" rel="noreferrer noopener">The War of the Worlds</a></em>, in that same 1938). The goal was to have the protagonist tell the story directly in the first person. It is interesting that Welles himself, then new to cinema, tried to persuade <a href="https://en.wikipedia.org/wiki/RKO_Pictures" target="_blank" rel="noreferrer noopener">RKO Pictures</a> to make a film version.</p>



<p>It had what it took to become one of the greatest films of all time and, perhaps, to raise public awareness of issues that, misinterpreted, instead led to the <em><a href="https://en.wikipedia.org/wiki/World_War_II" target="_blank" rel="noreferrer noopener">Second World War</a></em> the following year. But it was the use of the first person, along with political themes little loved by the <em>majors</em>, that probably pushed Hollywood not to consider it feasible. It was a drastic break from the rules, and the world wasn&#8217;t ready for it yet.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Orson Welles - &#039;Heart of Darkness&#039; and &#039;Life with Father&#039;" width="678" height="381" src="https://www.youtube.com/embed/_QBJopm-GMQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div></figure>



<p>In fact, <a href="https://en.wikipedia.org/wiki/Francis_Ford_Coppola" target="_blank" rel="noreferrer noopener">Coppola</a> tried to recover it in 1979 with <a href="https://en.wikipedia.org/wiki/Apocalypse_Now" target="_blank" rel="noreferrer noopener"><em>Apocalypse Now</em></a>, only freely inspired by the novel, being set in Vietnam rather than Africa. Certainly too late to aid peace in Europe and in the world.</p>



<h4 class="wp-block-heading">First person in the history of cinema</h4>



<p>There are some sporadic cases of first-person narration in film, especially in cinema&#8217;s early years. Sometimes little-known cases, but ones that somehow tried to change the way of seeing things. First of all I think of <a href="https://en.wikipedia.org/wiki/Dr._Jekyll_and_Mr._Hyde_(1931_film)" target="_blank" rel="noreferrer noopener"><em>Dr. Jekyll and Mr. Hyde</em></a> of 1931, by <a href="https://en.wikipedia.org/wiki/Rouben_Mamoulian" target="_blank" rel="noreferrer noopener">Rouben Mamoulian</a>.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Dr Jekyll and Mr. Hyde (1931) - YouTube" width="678" height="509" src="https://www.youtube.com/embed/GynMi0E7B5g?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div></figure>



<p>Other cases were <a href="https://en.wikipedia.org/wiki/William_Dieterle" target="_blank" rel="noreferrer noopener">William Dieterle</a>&#8216;s 1934 film <em><a href="https://en.wikipedia.org/wiki/The_Firebird_(1934_film)" target="_blank" rel="noreferrer noopener">The Firebird</a></em>; <em><a href="https://en.wikipedia.org/wiki/Lady_in_the_Lake" target="_blank" rel="noreferrer noopener">Lady in the Lake</a></em>, shot in 1947 by <a href="https://en.wikipedia.org/wiki/Robert_Montgomery">Robert Montgomery</a>; and <em><a href="https://en.wikipedia.org/wiki/Dark_Passage_(film)" target="_blank" rel="noreferrer noopener">Dark Passage</a></em> of the same year, based on the novel by <a href="https://en.wikipedia.org/wiki/David_Goodis" target="_blank" rel="noreferrer noopener">David Goodis</a>.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Dark Passage Official Trailer #1 - Humphrey Bogart Movie (1947) HD" width="678" height="509" src="https://www.youtube.com/embed/UFd0xohHqTI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div></figure>



<p>We will see two other, decidedly more modern, examples shortly in the section on experiential cinema, as they are more useful for a comparison with the cinema to come.</p>



<p>I have cited only first-person works, because discussing third-person works would be impossible, or useless&#8230; they make up practically all the films in existence. And also because, with a view to creating the more experiential cinema of the future, I believe these ideas deserve serious consideration.</p>



<p><strong>Then you will tell me what you think; my interest lies closer to the technical world than to film criticism.</strong></p>



<h3 class="wp-block-heading">Passive audience</h3>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="200" src="https://www.darioriccio.com/wp-content/uploads/2022/09/bimba-legge-libro-300x200.jpg" alt="Una bambina sta leggendo un libro seduta in un prato" class="wp-image-544" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/bimba-legge-libro-300x200.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/bimba-legge-libro.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p>As we said, <strong>the story</strong> in current and past media <strong>is told in the name and on behalf of the narrator</strong>. Whether it&#8217;s the writer or the director, the journalist or the presenter, each shows you only what they want. Have you ever wanted to look at something off to the side, or behind the camera, but the director didn&#8217;t show it to you?</p>



<p>It happens particularly often with televised sport, and it is perhaps for this reason that <strong>television itself was among the first media to &#8220;progress&#8221; towards a direction decided by the viewer</strong>. In Italy, <a href="https://simonesalvador.it/rubriche_sportinmedia/focus_sport_in_media/evoluzione-riprese-tv-grafiche-statistiche-partite-calcio-in-italia-serie-a-bundesliga-liga" target="_blank" rel="noreferrer noopener">as Wenner Gatta mentions</a>, when the &#8220;<a href="https://www.spidercam.tv" target="_blank" rel="noreferrer noopener">Spidercam</a>&#8221; was introduced on football pitches, viewers could choose with the remote control whether to watch the match with the classic direction or through the spider camera alone. From there, <a href="https://www.sky.com" target="_blank" rel="noreferrer noopener">Sky</a> above all has continued to develop the technology, exploiting multiple transmission channels for the same event.</p>



<p>After this study of the cinema that was, and still is, we can finally analyze the characteristics of the new cinema, projected towards the future, and understand <strong>how to overcome the problems of the last 127 years</strong>.</p>



<h2 class="wp-block-heading">The characteristics of experiential cinema: the future</h2>



<p>The five characteristics that we will find, in whole or in part, in the cinema of the future, given what technology currently makes available, are:</p>



<ul class="wp-block-list">
<li><a href="#immersive_cinema">Immersivity</a></li>



<li><a href="#interactive_non-linear_and_social">Interactivity, non-linearity and sociability</a></li>



<li><a href="#multi-sensory_presentation">Multi-sensory presentation</a></li>



<li><a href="#algorithmic_customized_in_real_time_thanks_to_data">Algorithmic, customized in real time thanks to data</a></li>



<li><a href="#first_person_perspective">First person perspective</a></li>
</ul>



<h3 class="wp-block-heading">Immersive Cinema</h3>



<p>We already know some partially experiential media: think of <strong>immersive virtual reality and augmented reality platforms</strong>. As we saw <a href="https://www.darioriccio.com/en/immersive-cinema-is-the-future/#why_virtual_reality_is_not_popular" target="_blank" rel="noreferrer noopener">in the last article</a>, these sit squarely on the continuum between real and virtual world hypothesized by Paul Milgram.</p>



<p>Immersion means &#8220;<em>enveloping the user in a real physical space using augmented or mixed reality on a portable or wearable device, also including haptic interfaces</em>&#8221;.</p>



<p>I found one of the first practical examples in the paper <em>&#8220;<a href="https://sites.cs.ucsb.edu/~holl/pubs/hollerer-1999-iswc.pdf" target="_blank" rel="noreferrer noopener">Situated Documentaries: Embedding Multimedia Presentations in the Real World</a>&#8221;</em>, by <em>Tobias Höllerer</em>, <em>Steven Feiner</em> and <em>John Pavlik</em>, presented at the 1999 <em>International Symposium on Wearable Computers</em>.</p>


<div class="wp-block-image">
<figure class="alignleft size-medium is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/08/documentari-situati-1-246x300.jpg" alt="" class="wp-image-450" width="246" height="300" srcset="https://www.darioriccio.com/wp-content/uploads/2022/08/documentari-situati-1-246x300.jpg 246w, https://www.darioriccio.com/wp-content/uploads/2022/08/documentari-situati-1-840x1024.jpg 840w, https://www.darioriccio.com/wp-content/uploads/2022/08/documentari-situati-1-768x936.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/08/documentari-situati-1.jpg 878w" sizes="(max-width: 246px) 100vw, 246px" /><figcaption class="wp-element-caption">Researcher wearing the MJW system.<br>Image from <strong><em>&#8220;<a href="https://sites.cs.ucsb.edu/~holl/pubs/hollerer-1999-iswc.pdf" target="_blank" rel="noreferrer noopener">Situated Documentaries: Embedding Multimedia Presentations in the Real World</a>&#8220;</em></strong>.</figcaption></figure></div>


<p>These &#8220;situated documentaries&#8221; were entirely based on wearables, to <strong>embed narratives and documentaries in real-world locations</strong>. The system is very reminiscent of current AR glasses, obviously with the technology available in 1999: a <strong>backpack with GPS tracker</strong> and a <strong>360° video camera</strong> (developed by <em><a href="http://www.cs.columbia.edu/~nayar/" target="_blank" rel="noreferrer noopener">Shree Nayar</a></em> of Columbia University), a kind of <strong>handheld computer</strong> with graphics, audio and video, and <strong>augmented reality glasses</strong> capable of marking points of interest in the real environment. It was called the MJW (<em>Mobile Journalist Workstation</em>).</p>



<p>The glasses could also play rudimentary 360° videos superimposed on the real world, and gaze was the main pointing system: staring at an object in the real world for at least half a second selected it, bringing up related information and multimedia files. You could also travel in time by touching the desired year on the handheld display.</p>
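<p>To make the mechanism concrete, here is a minimal sketch of dwell-time selection in Python. It is a hypothetical helper, not the actual MJW code; only the half-second threshold comes from the paper.</p>

```python
def update_gaze(state, target, timestamp, dwell=0.5):
    """Dwell-time selection: fire once when the gaze has rested on the
    same target for at least `dwell` seconds (0.5 s, as in the paper)."""
    if target != state.get("target"):
        # Gaze moved to a different target (or to nothing): restart the timer.
        state["target"] = target
        state["since"] = timestamp
        state["fired"] = False
        return None
    if target is not None and not state.get("fired") and timestamp - state["since"] >= dwell:
        state["fired"] = True  # fire only once per fixation
        return target
    return None
```

<p>Fed one gaze sample per frame, the function stays silent while the eye wanders and returns the target exactly once after a stable half-second fixation.</p>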



<p>A great example of what augmented reality would become 20 years later. AR is in fact developing rapidly, in common use since 2018 following the arrival on iPhones of the native <a href="https://developer.apple.com/augmented-reality/" target="_blank" rel="noreferrer noopener">ARKit</a> API (which, incidentally, I believe is to date the last noteworthy innovation in the smartphone field).</p>



<p>The first point of our cinema will therefore be immersion. But certainly a different immersion from what we have seen so far. The basis will in fact be the story told, the cinema. We will not immerse ourselves in the world, nor immerse ourselves alone or in <a href="https://www.oculus.com/horizon-worlds/" target="_blank" rel="noreferrer noopener">virtual common environments</a>. <strong>The immersion will be produced by the 360° screen, by stereoscopy, and by the &#8220;hall&#8221;, or dome, containing elements directly linked to the narrated story</strong>.</p>



<p>The writers will have to be really good, developing plots that make viewers feel &#8220;involved&#8221; without taking them for granted. The first thing that jumps to my mind, obviously: keep the protagonist of the story, who stands in for us spectators, constantly seated. At best it is a starting point that should not be underestimated; above all, the brainstorming in the early days will be really interesting.</p>



<h3 class="wp-block-heading">Interactive, non-linear and social</h3>



<p><em>A voice for the viewer</em>; <em>choice</em>. These may be the keywords of the non-linear structure of experiential cinema, which is therefore more complex.</p>



<p>Although without particular rules, it maintains its own logic based on temporal <strong>sequence</strong> (or <strong>order</strong>), <strong>duration</strong> and <strong>frequency</strong>: the categories proposed by the French essayist <a href="https://literariness.org/2016/12/03/gerard-genette-and-structural-narratology/" target="_blank" rel="noreferrer noopener">Gérard Genette in the field of literary fiction</a>, later introduced into film criticism by <a href="https://histart.umontreal.ca/repertoire-departement/professeurs/professeur/in/in13591/sg/Andr%C3%A9%20Gaudreault/" target="_blank" rel="noreferrer noopener">André Gaudreault</a> and <a href="http://www.davidbordwell.net" target="_blank" rel="noreferrer noopener">David Bordwell</a>.</p>



<h4 class="wp-block-heading">The order of events in the stories</h4>


<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="225" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/09/foto-ricordo-225x300.jpg" alt="Lots of Polaroid photos spread out on the floor with a girl's foot close by." class="wp-image-547" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/foto-ricordo-225x300.jpg 225w, https://www.darioriccio.com/wp-content/uploads/2022/09/foto-ricordo.jpg 640w" sizes="(max-width: 225px) 100vw, 225px" /></figure></div>


<p>In this section we open a general aside on stories, since they share characteristics with the future immersive cinema. Genette distinguished several relations between an event and its narration:</p>



<ul class="wp-block-list">
<li>events that occurred <em>before the narration</em> (<strong>analepses</strong>, or flashbacks);</li>

<li>events set <em>in the future</em>, and therefore only announced or anticipated (<strong>prolepses</strong>, or flash-forwards);</li>

<li>events narrated in <em>a different order from how they happened</em> (<strong>anachrony</strong>), used to make the story more compelling;</li>

<li>more rarely, <em>a movement between one narrative level and another</em> (<strong>metalepsis</strong>).</li>
</ul>



<p>An example is the &#8220;authorial metalepsis&#8221;, a sort of passage by the author from outside the story to inside it, or conversely a character who becomes a narrator.</p>



<p>Another literary example comes from the poet <em><a href="https://it.wikipedia.org/wiki/Publio_Virgilio_Marone" target="_blank" rel="noreferrer noopener">Virgil</a></em>, who &#8220;kills&#8221; Dido in Book IV of the <em><a href="https://en.wikipedia.org/wiki/Aeneid" target="_blank" rel="noreferrer noopener">Aeneid</a></em>, or from <a href="https://en.wikipedia.org/wiki/Denis_Diderot" target="_blank" rel="noreferrer noopener"><em>Diderot</em></a>, who writes in <a href="https://en.wikipedia.org/wiki/Jacques_the_Fatalist" target="_blank" rel="noreferrer noopener"><em>Jacques the Fatalist</em></a>: &#8220;Who could prevent me from <em>marrying off</em> the Master and <em>making him a cuckold</em>?&#8221;. Both examples are taken from <a href="http://dspace.unive.it/bitstream/handle/10579/15024/815358-1208192.pdf?sequence=2" target="_blank" rel="noreferrer noopener"><em><strong>Armando Mollica Bonivento</strong></em>&#8217;s doctoral thesis at Ca&#8217; Foscari University</a> (in Italian), which I invite you to read for a fuller understanding.</p>



<h4 class="wp-block-heading">The duration of events in the stories</h4>



<p>So, after the order (events that happened earlier, future events, narration in a different order from reality, or movement between narrative levels), we have the <strong>duration</strong>: definable as the &#8220;rhythm&#8221;, the &#8220;speed&#8221; at which events are told.</p>



<p>We can mainly divide it into four types:</p>



<ul class="wp-block-list">
<li><strong>ellipsis</strong> (very accelerated rhythm), with frequent chronological jumps;</li>

<li><strong>summary</strong> (relatively fast pace), in which a stretch of story is condensed to its main points; summaries can be of variable length;</li>

<li><strong>scene</strong>: relatively slow, the classic narration in near real time, of which dialogue is the typical example;</li>

<li><strong>descriptive pause</strong>: no progress in the story; we stop to describe a given moment.</li>
</ul>



<p>Clearly, these types can be combined: we can have, for example, a summary inserted within a dialogue scene.</p>



<h4 class="wp-block-heading">The frequency of events in the stories</h4>



<p>Frequency is nothing more than the relationship between how many times a certain event occurs in the (possibly invented) reality and how many times it is narrated: whether, in practice, the same event is narrated several times (or the same statement by a character is repeated).</p>



<p>I leave you the <a href="http://www.signosemio.com/genette/narratology.asp" target="_blank" rel="noreferrer noopener">link to an interesting article on the issue</a>, from which the following image is taken.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="642" src="https://www.darioriccio.com/wp-content/uploads/2022/08/summury-genette-narrative-typology-1024x642.jpg" alt="Synthesis of Genette's narrative typology." class="wp-image-456" srcset="https://www.darioriccio.com/wp-content/uploads/2022/08/summury-genette-narrative-typology-1024x642.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/08/summury-genette-narrative-typology-300x188.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/08/summury-genette-narrative-typology-768x481.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/08/summury-genette-narrative-typology-1536x963.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/08/summury-genette-narrative-typology.jpg 1945w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Lucie Guillemette and Cynthia Lévesque (2016), «Narratology», in Louis Hébert (dir.), <em>Signo</em> [online], Rimouski (Quebec), <a href="http://www.signosemio.com/genette/narratology.asp" target="_blank" rel="noreferrer noopener">http://www.signosemio.com/genette/narratology.asp</a>.</figcaption></figure>



<h3 class="wp-block-heading">The cinema of the future has an <em>intersubjective non-linear structure</em></h3>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/09/Dario__the_usual_neXt_The_cinema_of_the_future_has_an_intersubj_b484604e-bbc6-4ffc-b2ac-34982a7f1a95-300x300.png" alt="Abstract image representing the interior of a futuristic cinema." class="wp-image-551" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/Dario__the_usual_neXt_The_cinema_of_the_future_has_an_intersubj_b484604e-bbc6-4ffc-b2ac-34982a7f1a95-300x300.png 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/Dario__the_usual_neXt_The_cinema_of_the_future_has_an_intersubj_b484604e-bbc6-4ffc-b2ac-34982a7f1a95-150x150.png 150w, https://www.darioriccio.com/wp-content/uploads/2022/09/Dario__the_usual_neXt_The_cinema_of_the_future_has_an_intersubj_b484604e-bbc6-4ffc-b2ac-34982a7f1a95.png 400w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p>After this long interlude, let&#8217;s try to understand why the story, or rather the screenplay, is decidedly more complex for the cinema of the future. All these elements must be inserted within an <strong>intersubjective non-linear structure</strong>: the story can go back and forth in time, in an environment shared with other viewers who may want to make different choices from ours.</p>



<p><strong>Intersubjectivity is the biggest problem to solve in writing the new screenplays</strong>. If interactivity is a concept already well known thanks to video games, interactivity shared between several people, with a screen that necessarily shows everyone the same images, presumes that a small democracy is created inside the theater.</p>
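<p>As a sketch of how such a &#8220;small democracy&#8221; might resolve a fork in the story, assuming (my hypothesis, not an existing system) that each spectator casts at most one vote per fork and that ties fall back to the author&#8217;s default branch:</p>

```python
from collections import Counter

def decide_branch(votes, default):
    """Resolve one story fork for a shared screen: majority wins,
    ties and an empty ballot fall back to the default branch."""
    if not votes:
        return default
    tally = Counter(votes).most_common()
    top = tally[0][1]
    leaders = [choice for choice, n in tally if n == top]
    # A unique leader wins; a tie keeps the author in control.
    return leaders[0] if len(leaders) == 1 else default
```

<p>For example, <code>decide_branch(["escape", "fight", "escape"], default="fight")</code> returns <code>"escape"</code>, while a one-to-one tie returns the default branch.</p>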



<p>The intersubjective non-linear structure has a great advantage: <strong>it respects the viewer</strong>. It grants the right to choose one&#8217;s own story, to judge the morality of certain scenes. The viewer becomes the center of the film as well as a part of it. Current cinema is decidedly too one-sided, and <strong>if it has survived unchanged for so many years, it is only because of the ease</strong> (inherent in its characteristics) <strong>of using it as a means of political and commercial propaganda</strong>.</p>



<p>Thinking about the typical use of cinema, which is also and above all a moment of relaxation and fairly passive entertainment, we must not, however, fall into the temptation of excessive &#8220;gamification&#8221;, turning it into a video game. Nowadays we go to the cinema to relax with friends or family, to spend time without thinking too much. And choosing implies thinking&#8230; This is why interactivity must be limited, and not even mandatory; it probably won&#8217;t be the top priority in creating the cinema of the future.</p>



<p>We will see in future articles why interactive cinema has not been successful in its, albeit few, past experiments. But it is related to this.</p>



<h3 class="wp-block-heading">Multi-sensory presentation</h3>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="The Piano (1993) Official Trailer - Holly Hunter, Anna Paquin Movie HD" width="678" height="381" src="https://www.youtube.com/embed/cyTn4XIYH8M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Trailer of the film &#8220;The Piano&#8221; (in Italian &#8220;Lezioni di piano&#8221;)</figcaption></figure>



<p>Experiential media may seem like something of recent years, but that is not entirely true. For centuries, humanity has been developing their characteristic features, improving the available technology in small but constant steps.</p>



<h4 class="wp-block-heading">Wearable devices to engage the senses</h4>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="167" src="https://www.darioriccio.com/wp-content/uploads/2022/08/abacus-ring-300x167.jpg" alt="Chinese abacus ring." class="wp-image-460" srcset="https://www.darioriccio.com/wp-content/uploads/2022/08/abacus-ring-300x167.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/08/abacus-ring.jpg 498w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption class="wp-element-caption">Image from <a href="http://en.chinaculture.org/classics/2010-04/20/content_383263_4.htm" target="_blank" rel="noreferrer noopener">chinaculture.org</a>.</figcaption></figure></div>


<p>According to <a href="https://comminfo.rutgers.edu/pavlik-john-v" target="_blank" rel="noreferrer noopener">Prof. John V. Pavlik</a> in the book &#8220;<em><a href="http://cup.columbia.edu/book/journalism-in-the-age-of-virtual-reality/9780231184496" target="_blank" rel="noreferrer noopener">Journalism in the Age of Virtual Reality</a></em>&#8221;, the first &#8220;wearable&#8221; device can be traced to the <strong><a href="http://il-trafiletto.blogspot.com/2014/03/e-questa-la-prima-tecnologia.html" target="_blank" rel="noreferrer noopener">Chinese invention of the abacus ring</a></strong>, a wearable calculating instrument dating back to the seventeenth-century <a href="https://www.britannica.com/topic/Qing-dynasty" target="_blank" rel="noreferrer noopener"><em>Qing</em> dynasty</a>.</p>



<p>Then, in Europe, the &#8220;pedometer&#8221;, a step counter, was developed in 1780; and in 1965 came the unsuccessful American attempt to build the first exoskeleton (<em><a href="https://en.wikipedia.org/wiki/Hardiman" target="_blank" rel="noreferrer noopener">Hardiman</a></em>), meant to let humans lift up to 650 kg.</p>



<p>In recent years, developments have certainly been much faster, thanks also to the exponential pace of technology, which rarely stops once started. To put it in perspective: did you know that the GoPro wearable camera came out in 2004, just 18 years ago&#8230; and that it even used 35mm film?</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="GoPro 35mm Film Camera!" width="678" height="381" src="https://www.youtube.com/embed/48I7avgMcU0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div></figure>



<h4 class="wp-block-heading">Cinema stimulates us physiologically and sensorially</h4>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>My body is not just an object among all objects, but an object sensitive to all others, which reverberates at all sounds, vibrates in all colors and gives words their primal meaning through the way it receives them.</p>
<cite><a href="https://en.wikipedia.org/wiki/Maurice_Merleau-Ponty" target="_blank" rel="noreferrer noopener">Maurice Merleau-Ponty</a> in <em>Phenomenology of Perception</em></cite></blockquote>



<p>At the beginning of this paragraph I inserted the trailer for <em>The Piano</em>, <a href="https://en.wikipedia.org/wiki/Jane_Campion" target="_blank" rel="noreferrer noopener"><strong>Jane Campion</strong></a>&#8217;s 1993 masterpiece. I chose it as an excellent example of how current cinema tries, in more or less orthodox ways, to deceive the brain into involving senses not directly stimulated (touch, in this case). I also invite you to revisit the last article, which discussed <a href="https://www.darioriccio.com/en/immersive-cinema-is-the-future/#matthew_shifrin_legos_and_the_cinema_of_the_future" target="_blank" rel="noreferrer noopener"><em>Matthew Shifrin</em></a> and his Legos for the blind.</p>



<p>This is the magic of cinema. Its art has reached unimaginable heights, even before one tries to imagine how to go further, how to materially stimulate the other senses. Already in the 1940s, though, the philosopher <em><a href="https://en.wikipedia.org/wiki/Siegfried_Kracauer" target="_blank" rel="noreferrer noopener">Siegfried Kracauer</a></em> wrote:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The material elements that appear in the films directly stimulate the material layers of the human being: his nerves, his senses, his entire physiological substance.</p>
<cite>Siegfried Kracauer</cite></blockquote>



<h4 class="wp-block-heading">How to involve the five senses in the cinema</h4>



<p><strong>How, then, to involve the five senses, or at least more than two, in cinema?</strong> It will be necessary to proceed in stages, as technology allows. So let&#8217;s review the experiments of the past, to then understand how WE can engage the five senses of our viewers.</p>



<h5 class="wp-block-heading">Touch</h5>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="290" src="https://www.darioriccio.com/wp-content/uploads/2022/09/William-Castle-1946-300x290.jpg" alt="" class="wp-image-556" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/William-Castle-1946-300x290.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/William-Castle-1946.jpg 496w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption class="wp-element-caption">William Castle in 1946</figcaption></figure></div>


<p>First, <strong>touch</strong>: in 1959 <em><a href="https://en.wikipedia.org/wiki/William_Castle" target="_blank" rel="noreferrer noopener">William Castle</a></em> shot the horror film <strong><a href="https://en.wikipedia.org/wiki/The_Tingler" target="_blank" rel="noreferrer noopener">The Tingler</a></strong> and fitted the seats of some cinemas with a vibrating device called &#8220;Percepto!&#8221;, synchronized with the action. Castle was something of an evil genius&#8230; Before screenings of <strong>Macabre</strong> in 1958, he handed every viewer a $1,000 insurance policy against death by fright during the film. Then, for the 1959 film <strong><a href="https://en.wikipedia.org/wiki/House_on_Haunted_Hill" target="_blank" rel="noreferrer noopener">House on Haunted Hill</a></strong>, he flew a phosphorescent skeleton over the stalls on a pulley system called &#8220;<strong>Emergo</strong>&#8221;.</p>



<p>And finally there is &#8220;<strong>Illusion-O</strong>&#8221;, launched with the film <strong><a href="https://en.wikipedia.org/wiki/13_Ghosts" target="_blank" rel="noreferrer noopener">13 Ghosts</a></strong>: every element in the frame except the ghosts was given a blue tint, while the ghosts were printed with a red filter and superimposed on the frame. The audience received viewers with red and blue filters: looking through the blue filter, you couldn&#8217;t see the ghosts; through the red filter, you could.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="500" height="284" src="https://www.darioriccio.com/wp-content/uploads/2022/09/Illusiono.jpeg" alt="" class="wp-image-558" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/Illusiono.jpeg 500w, https://www.darioriccio.com/wp-content/uploads/2022/09/Illusiono-300x170.jpeg 300w" sizes="(max-width: 500px) 100vw, 500px" /></figure></div>
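<p>The trick can be simulated with a toy colour model (a deliberate simplification of the real tinted-print process): treat each filter as a gel that passes a single RGB channel.</p>

```python
def through_filter(pixel, filt):
    """Simulate viewing an RGB pixel through a coloured gel:
    a red gel passes only the red channel, a blue gel only blue."""
    r, g, b = pixel
    if filt == "red":
        return r
    if filt == "blue":
        return b
    return (r + g + b) / 3  # no filter: plain luminance average

# Scene elements are tinted blue, ghosts are printed in red.
scene_pixel = (0, 0, 200)
ghost_pixel = (180, 0, 0)
```

<p>Through the blue gel the ghost pixel yields 0 (invisible) while the scene survives at 200; through the red gel the relation flips and only the ghost lights up, which is exactly the behaviour the audience&#8217;s cards produced.</p>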


<p>Still on touch: the modern <strong>4D cinemas</strong> we all know obviously have fully vibrating chairs, as well as cords that brush the legs, fans for wind effects (hot or cold), and sprays of water for humidity.</p>



<h5 class="wp-block-heading">Smell</h5>



<p><strong>Smell</strong>: one strategy was adopted by <em><a href="https://en.wikipedia.org/wiki/John_Waters" target="_blank" rel="noreferrer noopener">John Waters</a></em> for the 1981 film <strong><a href="https://en.wikipedia.org/wiki/Polyester_(film)" target="_blank" rel="noreferrer noopener">Polyester</a></strong>: the &#8220;Odorama&#8221; system, essentially scratch-and-sniff cards with numbers. Each number had a smell (rose, pizza, etc.) to be scratched at the required moment, when the corresponding number appeared on screen.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Inside the Odorama Process" width="678" height="509" src="https://www.youtube.com/embed/k3WPpbEIYSs?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Creation of the cards for the Odorama system</figcaption></figure>



<p>Another attempt, in 1960, was <strong><a href="https://www.wired.com/2006/12/a-brief-history-2-2/" target="_blank" rel="noreferrer noopener">Smell-O-Vision</a></strong>, used only in <strong><a href="https://en.wikipedia.org/wiki/Scent_of_Mystery" target="_blank" rel="noreferrer noopener">Scent of Mystery</a></strong>, produced by <em><a href="https://en.wikipedia.org/wiki/Mike_Todd_Jr." target="_blank" rel="noreferrer noopener">Mike Todd Jr.</a></em> (son of the famous Mike Todd, producer of <strong><a href="https://en.wikipedia.org/wiki/Around_the_World_in_80_Days_(1956_film)" target="_blank" rel="noreferrer noopener">Around the World in 80 Days</a></strong>). Smell-O-Vision piped up to 30 evocative smells into the stalls through tubes leading to the individual seats, with perfume bottles held on a rotating drum.</p>



<p>Today a company called <a href="https://www.olorama.com" target="_blank" rel="noreferrer noopener"><strong>Olorama</strong></a> has taken the concept into modern industrial production. It would be nice to collaborate with them; objectively, the system seems much more functional than sniff cards (which were in fact reused only in a couple of children&#8217;s films).</p>



<p>The <strong>criticisms</strong> it received are interesting. According to <a href="https://entertainment.time.com/2012/12/10/fantasound-to-odorama-10-unusual-movie-technologies/slide/smell-o-vision/" target="_blank" rel="noreferrer noopener">Time</a>, some viewers complained of delays between the smell and the scene, others found the scents mixed unpleasantly, and <em>Henny Youngman</em> quipped that he didn&#8217;t understand the film because he had a cold.</p>



<p>Other drawbacks to watch out for are nausea and headaches caused by overly strong and persistent fragrances, possible discomfort, distraction, and stuffy air.</p>



<p>For completeness, I should mention <em><a href="https://en.wikipedia.org/wiki/Walter_Reade" target="_blank" rel="noreferrer noopener">Walter Reade Jr.</a></em>&#8217;s <strong><a href="https://www.nytimes.com/1959/12/10/archives/smells-of-china-behind-great-wall-uses-aromarama.html" target="_blank" rel="noreferrer noopener">AromaRama</a></strong>. The key difference from Smell-O-Vision? Simply that AromaRama used the air-conditioning system to diffuse the aromas. Cunning.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Eating with another is a way of saying: &#8220;I&#8217;m with you, I like you, let&#8217;s form a community together&#8221;.</p>
<cite>Thomas C. Foster</cite></blockquote>



<h5 class="wp-block-heading">Taste</h5>



<p>We have seen touch and smell; <strong>taste</strong> is obviously missing. Although progress has been made on taste-&#8220;transmission&#8221; systems (through objects to be licked), I don&#8217;t think they are developed enough yet and, above all, we are not ready to welcome them. And I don&#8217;t know if we ever will be&#8230;</p>



<p>However, during some screenings of <strong><a href="https://en.wikipedia.org/wiki/Willy_Wonka_%26_the_Chocolate_Factory" target="_blank" rel="noreferrer noopener">Willy Wonka &amp; the Chocolate Factory</a></strong>, <strong>Wonka chocolates were handed out to the spectators</strong>. From this came my idea of serving the food shown on screen directly to the audience. A double advantage: it deepens identification with the film, and it benefits both the cinema&#8217;s economy and its customers, who would respectively sell and buy better food than popcorn and Coca-Cola.</p>


<div class="wp-block-image">
<figure class="alignleft size-full"><img loading="lazy" decoding="async" width="250" height="250" src="https://www.darioriccio.com/wp-content/uploads/2022/09/edible-cinema2.jpeg" alt="Edible Cinema menu." class="wp-image-574" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/edible-cinema2.jpeg 250w, https://www.darioriccio.com/wp-content/uploads/2022/09/edible-cinema2-150x150.jpeg 150w" sizes="(max-width: 250px) 100vw, 250px" /></figure></div>


<p>I soon discovered that this idea is not exactly original: in 2012 in London, more precisely in Notting Hill, <strong><a href="http://www.ginmonkey.co.uk/2012/05/14/edible-cinema/" target="_blank" rel="noreferrer noopener">Edible Cinema</a></strong> took place. It was a collaboration between the Soho House team, organizer <em>Polly Betton</em> and experimental food designer <em>Andrew Stellitano</em>. Each attendee had numbered bags and glasses containing food and drink; a menu on each seat explained the dishes; and a woman appeared at the side of the screen during the film to indicate when to eat or drink each number.</p>



<h3 class="wp-block-heading">Algorithmic, customized in real time thanks to data</h3>



<p>In a data-centric society, cinema cannot stand by and watch. This must of course be done with respect for privacy, and without using the data for less than noble purposes.</p>



<p>A great use could come from geolocation inside the dome, making it possible to send different signals to different spectators. Above all, it will be possible to track the direction of each spectator&#8217;s gaze, both to understand what they find most interesting and to provide a more passive, and therefore less tiring, &#8220;input device&#8221; for the interactivity we have just discussed.</p>



<p>Other anonymized data, such as moment-by-moment physiological responses, may be useful for developing subsequent films and evaluating audience reactions. In a later stage of experiential cinema, these responses could even be exploited within the story itself (for example, interactively adjusting volumes to heighten or dampen the audience&#8217;s reactions).</p>



<h4 class="wp-block-heading">Women&#8217;s Aid, a prime example of algorithmic interactive advertising</h4>



<p>On International Women&#8217;s Day in 2015, an interactive billboard depicting the face of a woman who had suffered violence was installed in the <a href="https://en.wikipedia.org/wiki/Canary_Wharf" target="_blank" rel="noreferrer noopener">Canary Wharf</a> business district in London. A <a href="https://en.wikipedia.org/wiki/Face_detection" target="_blank" rel="noreferrer noopener">face detection</a> camera updated a counter and changed the image whenever a passerby paid attention to the advertisement. It was a typical example, albeit from the advanced-advertising sector, of using data to modify the result obtained.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Look At Me: Women&#039;s Aid interactive billboard" width="678" height="381" src="https://www.youtube.com/embed/wEybVOerb9Q?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Look At Me: <strong>Women&#8217;s Aid</strong> Interactive Billboard</figcaption></figure>
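The interaction loop behind such a billboard is easy to sketch. Below is a minimal, hypothetical version in Python: the face detector is stubbed out as a per-frame boolean (a real installation would feed in detections from a street-facing camera and a face-detection library), and each detected gaze advances the counter and the displayed image.

```python
class GazeBillboard:
    """Minimal sketch of a gaze-reactive billboard.

    The real Women's Aid installation used a face-detection camera;
    here detection is a stubbed boolean so the loop runs anywhere.
    """

    def __init__(self, images):
        self.images = images  # image stages, shown as attention grows
        self.counter = 0      # number of gazes detected so far

    def on_frame(self, face_detected: bool) -> str:
        # Each detected gaze bumps the counter; the displayed image
        # advances through the stages and stops at the last one.
        if face_detected:
            self.counter += 1
        stage = min(self.counter, len(self.images) - 1)
        return self.images[stage]


billboard = GazeBillboard(["bruised", "healing", "healed"])
billboard.on_frame(False)           # nobody looking: image stays at stage 0
billboard.on_frame(True)            # first gaze: image advances
current = billboard.on_frame(True)  # second gaze: last stage
```

Swapping the stubbed boolean for a real detector (for instance OpenCV&#8217;s Haar cascades) would be the natural extension.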



<h4 class="wp-block-heading">Artificial intelligence for storytelling</h4>



<p>Already today many stories are created with artificial intelligence, as the American <a href="https://www.ap.org/discover/artificial-intelligence" target="_blank" rel="noreferrer noopener">Associated Press</a>, for example, does. The <em>Los Angeles Times</em> also created an algorithmic robot, <a href="https://www.latimes.com/people/quakebot" target="_blank" rel="noreferrer noopener">Quakebot</a>, which automatically acquires data from the <a href="https://www.usgs.gov" target="_blank" rel="noreferrer noopener">U.S. Geological Survey</a>, the US agency for earthquake analysis, and automatically writes a complete article with magnitude, epicenter and time. The human editor only has to verify its correctness and publish.</p>


<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="300" height="45" src="https://www.darioriccio.com/wp-content/uploads/2022/09/lo-times-300x45.jpg" alt="Los Angeles Times newspaper." class="wp-image-576" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/lo-times-300x45.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/lo-times.jpg 660w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
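As a toy illustration of the Quakebot idea (not the Times&#8217; actual code), the sketch below drafts a one-sentence report from a feature shaped like the USGS GeoJSON earthquake feed, whose <code>properties</code> include <code>mag</code>, <code>place</code> and <code>time</code> in milliseconds since the epoch. The sample values are made up.

```python
from datetime import datetime, timezone

def quake_article(feature: dict) -> str:
    """Draft a short earthquake report from a USGS-style GeoJSON feature.

    Field names follow the USGS earthquake feed (properties.mag,
    properties.place, properties.time in ms since the epoch); a human
    editor would still review the draft before publishing.
    """
    props = feature["properties"]
    when = datetime.fromtimestamp(props["time"] / 1000, tz=timezone.utc)
    return (
        f"A magnitude {props['mag']:.1f} earthquake struck "
        f"{props['place']} at {when:%H:%M} UTC on {when:%B %d, %Y}."
    )

# Hypothetical feature in the shape of the USGS feed:
sample = {"properties": {"mag": 4.2,
                         "place": "10km NE of Ridgecrest, CA",
                         "time": 1_700_000_000_000}}
draft = quake_article(sample)
```

The human-in-the-loop step the article describes would sit between this draft and publication.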


<p>These robots, these artificial intelligences, will lead to the development of increasingly engaging, interactive and multisensory stories. This will be fundamental for the cinema of the future: assisting human screenwriters in writing stories so complex that they could no longer be managed unaided.</p>



<p>Furthermore, they will be able to exploit data from the world at large, and from the cinemas themselves, to create stories &#8220;tailor-made&#8221; for the target audience.</p>



<p>Artificial intelligence itself is, and will increasingly be, used in technical video production. Google, for example, has the <strong><em>Jump</em></strong> assembler (well described <a href="https://storage.googleapis.com/pub-tools-public-publication-data/pdf/45617.pdf" target="_blank" rel="noreferrer noopener">in their own paper</a>), which handles the stitching of 16 high-quality video streams into a complete 360° VR video. Its main result was the elimination of much of the <em>latency</em>, thereby increasing the sense of reality of the image shown.</p>



<p>Ultimately, AI will certainly be the center of attention. And, to avoid controversy, it must be handled with great care, with a focus on privacy and on the centrality of the human being.</p>



<h3 class="wp-block-heading">First person perspective</h3>



<p>We already know some partially experiential media: think of immersive virtual reality and augmented reality platforms. As we saw <a href="https://www.darioriccio.com/en/immersive-cinema-is-the-future/#why_virtual_reality_is_not_popular" target="_blank" rel="noreferrer noopener">in the last article</a>, these sit squarely on the continuum between real and virtual world hypothesized by <em><a href="https://www.mie.utoronto.ca/faculty_staff/milgram/" target="_blank" rel="noreferrer noopener">Paul Milgram</a></em>.</p>



<p>These media are clearly first-person, as the real protagonist of the story is ourselves. The experience comes from contact (albeit still virtual) and from direct observation of objects and events, in whatever way we prefer.</p>



<p>Experiential cinema will often have to be in the first person, unlike most present and past films, which instead tell us stories through eyes outside the story. But ours will be a different first person: we, the spectators, will be the main character. Recently, in 2016, the film <em>Hardcore</em>, directed by the Russian musician <em><a href="https://twitter.com/naishuller" target="_blank" rel="noreferrer noopener">Ilya Najšuller</a></em>, enjoyed good success. Evidently, producing cinema from a cultural background outside of it helps one take risks and innovate.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="HARDCORE! - Trailer Ufficiale Italiano | HD" width="678" height="381" src="https://www.youtube.com/embed/HEQB_rl87JI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Italian trailer of the movie &#8220;Hardcore&#8221;</figcaption></figure>



<p>The whole of <em>Hardcore</em> is shot in first person. I believe its success is well deserved, and it is a good point of reference for the cinema to come. It borrows the narrative style of video games, inviting us into someone else&#8217;s life and letting us look at the world through his eyes. The only difference, I think, is that in the new cinema it will have to be our own life. I know, it&#8217;s scary, but the writers shouldn&#8217;t tell us that either&#8230;</p>



<p>I also want to mention another film, not entirely in first person (the protagonist is visible in various scenes, which breaks the total identification), but with a decidedly more constructed and engaging plot than <em>Hardcore</em>: <em><a href="https://en.wikipedia.org/wiki/Enter_the_Void" target="_blank" rel="noreferrer noopener">Enter the Void</a></em>, from 2009, directed by the Argentine director <a href="https://www.instagram.com/gasparnoeofficial/?hl=it" target="_blank" rel="noreferrer noopener"><strong>Gaspar Noé</strong></a>.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Enter the Void: trailer italiano" width="678" height="509" src="https://www.youtube.com/embed/LkBBWLIncZM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Enter the Void Italian Trailer</figcaption></figure>



<p>Two films I recommend you see: first, because each will be a special and different experience; and second, to get a taste (albeit a reduced one) of what cinema will be like in a few years.</p>



<h4 class="wp-block-heading">Making everyone identify with the story in experiential cinema</h4>



<p><strong>How can you build a film for many people that realistically represents the life of each of them?</strong> Once again, the writers will have to do a great job of multiple introspection, creating engaging but generalist stories and introducing characters who are new to the protagonist as well. No known relatives; at most a distant cousin we didn&#8217;t know we had. Nothing more.</p>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="273" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/09/DUXU-273x300.jpg" alt="Logo of the DUXU of Los Angeles." class="wp-image-578" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/DUXU-273x300.jpg 273w, https://www.darioriccio.com/wp-content/uploads/2022/09/DUXU.jpg 652w" sizes="(max-width: 273px) 100vw, 273px" /></figure></div>


<p>In 2015, on the occasion of the fourth international conference on <strong>Design</strong>, <strong>User Experience</strong> and <strong>Usability</strong> in Los Angeles, <a href="https://en.wikipedia.org/wiki/Aaron_Marcus" target="_blank" rel="noreferrer noopener"><em>Aaron Marcus</em></a> collected, in a very <a href="https://books.google.ch/books?id=fLU0CgAAQBAJ&amp;printsec=frontcover&amp;hl=it#v=onepage&amp;q&amp;f=false" target="_blank" rel="noreferrer noopener">useful book</a>, a body of research on the current relationship between computer science, the virtual world and human beings. Starting from the assumption that each of us has our own culture and knowledge, <strong>the design</strong> (and therefore whoever creates it) must leave us the freedom to experiment, to have doubts and thoughts: <strong>it must build a world acceptable to all the personal cultures of the spectators</strong> and, at the same time, propose its own idea of the universe.</p>



<p>We want and need to reflect, to understand. Paradoxically, this leads to <em>hyperrealism</em>: a construction of the virtual world that is very faithful to the real one, in which we can experience and be present in the first person.</p>



<h2 class="wp-block-heading">The importance of sociality in the cinema of the future</h2>



<p>We have seen, speaking of the first-person perspective, that technology now allows us to choose a place, at a given historical moment, and live in it virtually. But one thing will be important: <strong>we will have to live it with others</strong>.</p>


<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="300" height="200" src="https://www.darioriccio.com/wp-content/uploads/2022/09/sociality-300x200.jpg" alt="Group of kids sitting on a sofa." class="wp-image-580" srcset="https://www.darioriccio.com/wp-content/uploads/2022/09/sociality-300x200.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/09/sociality.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p>Sociality is a theme that should not be underestimated: <strong>there is no life alone</strong>. To reconstruct the real in the virtual, other human beings (and living beings in general) must necessarily be present, together with concrete interaction with them.</p>



<p>Basically, reality is a concept inherent in our minds. Many things are real to both you and me, while others are real to only one of us. &#8220;Mediated&#8221; reality carries an inherent complexity here, mainly because it must reproduce a synthesis of our personal realities.</p>



<p>Let me explain better with an example: <em>Aurora</em> has a habit of calling her mother whenever she has problems. In her reality, her mother is always present: she gives her advice, embraces her, offers her constant support. <em>Marco</em>, on the other hand, has a bad relationship with his mother. She has always had problems, and he has tried for years to help her, to no avail.</p>



<p>Aurora&#8217;s reality requires a loving and ever-present mother; Marco&#8217;s, on the contrary, rejects her. <strong>The writers of the new cinema will therefore have to achieve hyperrealism without causing moral friction with any of the spectators.</strong> How is this resolved? Important characters may, or perhaps should, be ambiguous: ambiguous enough that everyone can &#8220;accept&#8221; them by blending their own inner reality with the reproduced, &#8220;mediated&#8221; one. In short, see them as you wish.</p>



<p>Ultimately, the &#8220;static&#8221; part of the real world, such as trees and houses, is always there, and easily reproducible. The human, social part, on the other hand, represents a decidedly more complex challenge, though not an insurmountable one.</p>
<p>The post <a href="https://www.darioriccio.com/en/what-is-experiential-cinema/">What is experiential cinema</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/what-is-experiential-cinema/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Immersive cinema is the future</title>
		<link>https://www.darioriccio.com/en/immersive-cinema-is-the-future/</link>
					<comments>https://www.darioriccio.com/en/immersive-cinema-is-the-future/#comments</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Fri, 15 Jul 2022 11:27:00 +0000</pubDate>
				<category><![CDATA[Cinema]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[cinema]]></category>
		<category><![CDATA[feel]]></category>
		<category><![CDATA[hearing]]></category>
		<category><![CDATA[immersive cinema]]></category>
		<category><![CDATA[immersiveness]]></category>
		<category><![CDATA[innovation]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/immersive-cinema-is-the-future/</guid>

					<description><![CDATA[<p>What do Douglas Trumbull, Matthew Shifrin and many others have in common? They want to make cinema more immersive. To save the movie theater from the slow agony it is experiencing.</p>
<p>The post <a href="https://www.darioriccio.com/en/immersive-cinema-is-the-future/">Immersive cinema is the future</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-seriously-simple-podcasting-audio-player"><span>
<audio class="wp-audio-shortcode" id="audio-0-1" style="width:100%" controls><source type="audio/mpeg" src="https://www.darioriccio.com/podcast-player/435/il-cinema-immersivo-e-il-futuro.mp3?_=1"/>https://www.darioriccio.com/podcast-player/435/il-cinema-immersivo-e-il-futuro.mp3</audio></span></p>



<p>Current cinema, a modern derivative of the old analog cinema, has five characteristics: a <em>static</em> and <em>linear structure</em>, <em>dual mode</em>, <em>episodic</em> form, a mainly <em>third-person</em> perspective, and <em>no interactivity</em>. We will see later what each of these items means, while I continue to think about <strong>what the cinema of the future will be</strong>, about immersive cinema, and how to build a good prototype.</p>



<p>I started this article on a Swiss train and finished it in Barcelona&#8217;s charming <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=&amp;cad=rja&amp;uact=8&amp;ved=2ahUKEwjy5aWB1_r4AhXd7rsIHWstCiwQFnoECAsQAQ&amp;url=https%3A%2F%2Fwww.instagram.com%2Fespaijoliu%2F&amp;usg=AOvVaw0H7Z4m6TmIklGbELvZOLC-" target="_blank" rel="noreferrer noopener">Espai Joliu</a>. I love trains: being able to work or relax while moving between two places. Among my corporate benefits is the general season ticket for all means of transport in Switzerland: lucky me. On a few days off from work, I take the opportunity to reach Domodossola (the first Italian town across the border), or to wander the length and breadth of the land of the Helvetians, which holds surprises and magical corners almost everywhere.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="755" src="https://www.darioriccio.com/wp-content/uploads/2022/07/switzerland-ge24d15d39_1920-1024x755.jpg" alt="Trains at Spiez station, Switzerland." class="wp-image-411" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/switzerland-ge24d15d39_1920-1024x755.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/07/switzerland-ge24d15d39_1920-300x221.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/07/switzerland-ge24d15d39_1920-768x566.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/07/switzerland-ge24d15d39_1920-1536x1132.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/07/switzerland-ge24d15d39_1920.jpg 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Trains in Spiez, Switzerland</figcaption></figure>



<h2 class="wp-block-heading">How important is immersion in the cinema of the future</h2>



<p>I am part of the line of thought, now increasingly in vogue, which imagines a single evolution for cinema: <strong><em>immersive cinema</em></strong>. According to purists, or &#8220;traditionalists&#8221;, cinema must be left as it is. But we have <a href="https://www.darioriccio.com/2022/02/02/innoviamo-il-cinema-moderno/" target="_blank" rel="noreferrer noopener">already seen</a> that, after all, they are not necessarily right.</p>



<p>As you may have noticed, I don&#8217;t publish articles on my blog at any fixed cadence. I don&#8217;t like online editorial plans; I prefer quality over quantity, and I avoid rushing out in-depth articles just to publish &#8220;Friday at 7&#8221;. This article took me days of research, maybe more, and clearly blogging is not my primary activity, but it has produced a roundup of ideas and people that I hope will be a definitive answer to the question: <em>what will the cinema of the future be like?</em></p>



<p>Few have seriously tried to innovate it, and in this article we will get to know some of them. Certainly not for lack of a market; rather, for lack of resources, which lately are increasingly channeled into the lucrative, and more manageable, virtual world.</p>



<p><em>This article follows from:</em> <a href="https://www.darioriccio.com/2022/02/02/innoviamo-il-cinema-moderno/">Innovating modern cinema</a></p>



<h2 class="wp-block-heading" id="vantaggi-e-svantaggi-del-cinema-moderno">Advantages and disadvantages of modern cinema</h2>


<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="300" height="214" src="https://www.darioriccio.com/wp-content/uploads/2022/07/mobile-phone-g260c3d7f7_640-300x214.jpg" alt="Girl watching a movie with smartphone." class="wp-image-415" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/mobile-phone-g260c3d7f7_640-300x214.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/07/mobile-phone-g260c3d7f7_640-210x150.jpg 210w, https://www.darioriccio.com/wp-content/uploads/2022/07/mobile-phone-g260c3d7f7_640.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>


<p>I reflected on the advantages of cinema as it is conceived today:</p>



<ul class="wp-block-list">
<li>It can be used on the move, even here on the train.</li>



<li>Other actions can be taken at the same time, relegating the audiovisual to the mere role of companion.</li>



<li>It can be used in company.</li>
</ul>



<p><strong>I&#8217;d like you to suggest other advantages</strong>; I really want you to comment. <strong>Turning to the disadvantages:</strong></p>



<ul class="wp-block-list">
<li>The methods of use have been updated little for decades.</li>



<li>It is not able to &#8220;fill&#8221; all the senses of the viewer.</li>
</ul>



<p>So let&#8217;s create a system, using already existing technology, able to minimize these disadvantages.</p>



<h2 class="wp-block-heading" id="analisi-dei-vantaggi-e-svantaggi-del-cinema-moderno">Analysis of the advantages and disadvantages of modern cinema</h2>



<h3 class="wp-block-heading" id="1-si-puo-usufruire-del-cinema-in-mobilita">1) You can watch cinema on the go</h3>



<p>Clearly, &#8220;old style&#8221; cinema is by definition destined for theaters. However, in order to recover costs, a production must be accessible to as many people as possible. A good substitute for the small screen are virtual reality headsets, which allow you to watch stereoscopic 360° videos.</p>



<h3 class="wp-block-heading" id="2-si-puo-fare-altro-mentre-si-guarda-un-film">2) You can do other things while watching a movie</h3>



<p>Here virtual reality currently shows all its limits, due to the complete isolation it creates. This limit can perhaps be bypassed with augmented reality, or with a serious upgrade of existing virtual reality technology.</p>



<h3 class="wp-block-heading" id="3-si-puo-guardare-un-film-in-gruppo">3) You can watch a movie in company</h3>



<p>Thinking about how to update cinema in the sense of the movie theater, there is a technology that has been used for decades, above all in science outreach: the <strong>fulldome</strong>, i.e. projection inside a dome. The same system is not new to film experimentation, however, including by great masters of the past.</p>



<h3 class="wp-block-heading" id="4-le-modalita-di-fruizione-si-aggiornano-poco-da-decenni">4) The methods of use have been updated little for decades</h3>



<p>The first disadvantage of cinema is more of an observation. While, on the one hand, great directors or companies have brought immense innovations to shooting methods (think of <em><a href="https://it.wikipedia.org/wiki/James_Cameron" target="_blank" rel="noreferrer noopener">Cameron</a></em>&#8217;s <strong><a href="https://www.mymovies.it/film/2009/avatar/news/tutteleinnovazionidelfilm/" target="_blank" rel="noreferrer noopener">Avatar</a></strong>, with <a href="https://www.mo-sys.com/what-is-motion-capture-and-how-does-it-work/" target="_blank" rel="noreferrer noopener">Motion Capture</a> taken to the nth degree, or <a href="https://www.ilm.com/stagecraft/" target="_blank" rel="noreferrer noopener"><strong>Industrial Light &amp; Magic</strong></a>, which together with <strong><a href="https://www.unrealengine.com/en-US/virtual-production" target="_blank" rel="noreferrer noopener">Epic</a></strong> kicked off serious Virtual Production), little has been done for the theaters themselves.</p>



<p><strong>The transition from film to digital</strong>, without wanting to diminish it, <strong>is comparable to the upgrade of our home TV.</strong> Better quality, lower distribution costs, <a href="https://it.wikipedia.org/wiki/Cinema_tridimensionale" target="_blank" rel="noreferrer noopener">stereoscopy</a> at its best, but otherwise everything as before.</p>



<h3 class="wp-block-heading" id="5-non-e-in-grado-di-riempire-il-maggior-numero-di-sensi-dello-spettatore-un-secolo-fa-grande-innovatore-il-cinema-di-oggi-non-riesce-ad-offrire-quel-qualcosa-in-piu-rispetto-ad-altri-strumenti-piu-moderni-un-esempio-guardando-un-ipad-o-la-tv-il-campo-visivo-coperto-dall-immagine-e-di-circa-25-lo-schermo-cinematografico-in-media-copre-un-campo-visivo-di-50-su-360-dipende-molto-dalla-posizione-in-sala-e-necessario-incrementarlo-se-davvero-si-vuole-offrire-ancora-quell-effetto-wow-che-vale-il-prezzo-del-biglietto-se-davvero-vogliamo-far-sentire-gli-spettatori-immersi-nella-proiezione-nell-ambiente-rappresentato-l-acustica-e-sempre-stata-molto-buona-i-sistemi-dolby-digital-attuali-gia-permettono-un-ottima-immersivita-cio-che-manca-e-ancora-il-tatto-l-olfatto-e-perche-no-anche-il-gusto">5) It is not able to &#8220;fill&#8221; all the senses of the viewer.</h3>



<p id="5-non-e-in-grado-di-riempire-il-maggior-numero-di-sensi-dello-spettatore-un-secolo-fa-grande-innovatore-il-cinema-di-oggi-non-riesce-ad-offrire-quel-qualcosa-in-piu-rispetto-ad-altri-strumenti-piu-moderni-un-esempio-guardando-un-ipad-o-la-tv-il-campo-visivo-coperto-dall-immagine-e-di-circa-25-lo-schermo-cinematografico-in-media-copre-un-campo-visivo-di-50-su-360-dipende-molto-dalla-posizione-in-sala-e-necessario-incrementarlo-se-davvero-si-vuole-offrire-ancora-quell-effetto-wow-che-vale-il-prezzo-del-biglietto-se-davvero-vogliamo-far-sentire-gli-spettatori-immersi-nella-proiezione-nell-ambiente-rappresentato-l-acustica-e-sempre-stata-molto-buona-i-sistemi-dolby-digital-attuali-gia-permettono-un-ottima-immersivita-cio-che-manca-e-ancora-il-tatto-l-olfatto-e-perche-no-anche-il-gusto">A great innovator a century ago, today&#8217;s cinema fails to offer that &#8220;something more&#8221; compared to other, more modern tools.</p>



<p id="5-non-e-in-grado-di-riempire-il-maggior-numero-di-sensi-dello-spettatore-un-secolo-fa-grande-innovatore-il-cinema-di-oggi-non-riesce-ad-offrire-quel-qualcosa-in-piu-rispetto-ad-altri-strumenti-piu-moderni-un-esempio-guardando-un-ipad-o-la-tv-il-campo-visivo-coperto-dall-immagine-e-di-circa-25-lo-schermo-cinematografico-in-media-copre-un-campo-visivo-di-50-su-360-dipende-molto-dalla-posizione-in-sala-e-necessario-incrementarlo-se-davvero-si-vuole-offrire-ancora-quell-effetto-wow-che-vale-il-prezzo-del-biglietto-se-davvero-vogliamo-far-sentire-gli-spettatori-immersi-nella-proiezione-nell-ambiente-rappresentato-l-acustica-e-sempre-stata-molto-buona-i-sistemi-dolby-digital-attuali-gia-permettono-un-ottima-immersivita-cio-che-manca-e-ancora-il-tatto-l-olfatto-e-perche-no-anche-il-gusto">When looking at an iPad or a modern TV, the field of view covered by the image is approximately 25°. The cinema screen, on average, covers a field of view of about 50° out of the full 360° of the sphere (it depends a lot on your position in the theater). It needs to be increased if we really want to offer once again that immersive &#8220;wow&#8221; effect which is worth the trip and the ticket price.</p>
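Those field-of-view figures follow from basic trigonometry: a flat screen of width <em>W</em> viewed head-on from distance <em>d</em> subtends a horizontal angle of 2·atan(W / 2d). A quick sketch (the widths and distances below are illustrative assumptions, not measurements from the article):

```python
import math

def horizontal_fov(screen_width_m: float, distance_m: float) -> float:
    """Horizontal field of view, in degrees, of a flat screen of the
    given width viewed head-on from the given distance."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * distance_m)))

# Illustrative viewing setups (assumed numbers):
tablet = horizontal_fov(0.25, 0.55)  # ~25 cm wide tablet at arm's length
cinema = horizontal_fov(12.0, 13.0)  # 12 m screen seen from mid-theater
```

With these numbers the tablet comes out around 26° and the cinema screen around 50°, consistent with the 25°/50° estimates above, and it is clear why moving closer to a bigger screen (or wrapping it into a dome) is the only way to raise the figure.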



<p id="5-non-e-in-grado-di-riempire-il-maggior-numero-di-sensi-dello-spettatore-un-secolo-fa-grande-innovatore-il-cinema-di-oggi-non-riesce-ad-offrire-quel-qualcosa-in-piu-rispetto-ad-altri-strumenti-piu-moderni-un-esempio-guardando-un-ipad-o-la-tv-il-campo-visivo-coperto-dall-immagine-e-di-circa-25-lo-schermo-cinematografico-in-media-copre-un-campo-visivo-di-50-su-360-dipende-molto-dalla-posizione-in-sala-e-necessario-incrementarlo-se-davvero-si-vuole-offrire-ancora-quell-effetto-wow-che-vale-il-prezzo-del-biglietto-se-davvero-vogliamo-far-sentire-gli-spettatori-immersi-nella-proiezione-nell-ambiente-rappresentato-l-acustica-e-sempre-stata-molto-buona-i-sistemi-dolby-digital-attuali-gia-permettono-un-ottima-immersivita-cio-che-manca-e-ancora-il-tatto-l-olfatto-e-perche-no-anche-il-gusto">The acoustics have always been very good, and current Dolby Digital systems already allow excellent immersion. What is still missing is <strong>touch</strong>, <strong>smell</strong> and, why not, <strong>taste</strong> too.</p>



<h2 class="wp-block-heading" id="perche-la-realta-virtuale-non-funziona">Why virtual reality is not popular</h2>



<p>I&#8217;ve been trying to figure out why we never want to put a headset on, and a recent University of Glasgow <a href="http://eprints.gla.ac.uk/271016/1/271016.pdf" target="_blank" rel="noreferrer noopener">paper/survey</a> by <a href="https://it.linkedin.com/in/laurabajor" target="_blank" rel="noreferrer noopener">Laura Bajorunaite</a>, <a href="http://www.dcs.gla.ac.uk/~stephen/aboutme.shtml" target="_blank" rel="noreferrer noopener">Stephen Brewster</a> and <a href="https://www.gla.ac.uk/schools/computing/staff/juliewilliamson/#researchinterests,publications,articles" target="_blank" rel="noreferrer noopener">Julie R. Williamson</a> certainly helped.</p>



<p>The paper deals specifically with the use of VR headsets on public transport, which is in itself a place where films are frequently watched. But the argument extends, with the necessary adaptations, somewhat more generally.</p>



<p>At the root there are clearly reasons of personal <strong>safety</strong>, but also of <strong>comfort</strong>. Social acceptance is also still a long way off: VR users look &#8220;silly&#8221; in the eyes of other passengers. Progress is still being made to better integrate one&#8217;s virtual world with reality.</p>



<p>For example, occasionally glancing at the real world through the cameras installed on the headset, or having indications of where other human beings are positioned in real space, can reassure us. Complete Virtual Reality should be avoided in favor of <strong>Augmented Virtuality</strong>, as already defined in 1999 by Prof. <em><a href="https://www.mie.utoronto.ca/faculty_staff/milgram/" target="_blank" rel="noreferrer noopener">Paul Milgram</a></em> on his continuum from real world to virtual world.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="296" src="https://www.darioriccio.com/wp-content/uploads/2022/07/Virtual-Reality-Continuum-1024x296.jpg" alt="" class="wp-image-374" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/Virtual-Reality-Continuum-1024x296.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/07/Virtual-Reality-Continuum-300x87.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/07/Virtual-Reality-Continuum-768x222.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/07/Virtual-Reality-Continuum-1536x445.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/07/Virtual-Reality-Continuum.jpg 1624w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Virtual Reality Continuum, by Paul Milgram</figcaption></figure>



<p>I invite you to read the paper to learn more, but the concept we must keep in mind is that total immersion is a problem.</p>



<h2 class="wp-block-heading">Immersive cinema</h2>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="276" src="https://www.darioriccio.com/wp-content/uploads/2022/07/Transportation__Travel_Pavilion-300x276.jpg" alt="New York World's Fair of 1964" class="wp-image-378" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/Transportation__Travel_Pavilion-300x276.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/07/Transportation__Travel_Pavilion.jpg 640w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption class="wp-element-caption">New York World&#8217;s Fair of 1964, with <em>To The Moon and Beyond</em>&#8216;s &#8220;Moon Dome&#8221; in the foreground. Photo by Doug Coldwell from Wikipedia.</figcaption></figure></div>


<p>We have already covered the history of fulldome cinema in detail (article <a href="https://www.darioriccio.com/2022/03/05/video-fulldome-il-cinema-del-futuro/" target="_blank" rel="noreferrer noopener">available here</a>), but <em><a href="https://en.wikipedia.org/wiki/To_the_Moon_and_Beyond" target="_blank" rel="noreferrer noopener">To the Moon and Beyond</a></em> is considered by many to be the first immersive film in history: the first to cover a 360° field of view, filmed with the <a href="https://it.wikipedia.org/wiki/Cinerama" target="_blank" rel="noreferrer noopener">Cinerama</a> technique using a single fisheye lens on 70mm film at 18 fps, and projected onto the ceiling of a large dome about 30 meters high.</p>



<p>While its purpose was still somewhat &#8220;didactic&#8221;, it moved away from the astronomical canon to bring instead a stronger engagement with the theme of transportation. The film also impressed Stanley Kubrick, who hired its creators as VFX consultants for the legendary <em>2001: A Space Odyssey</em>: <a href="https://en.wikipedia.org/wiki/Lester_Novros" target="_blank" rel="noreferrer noopener">Lester Novros</a> and our next protagonist, <a href="https://it.wikipedia.org/wiki/Douglas_Trumbull"><em>Douglas Trumbull</em></a>.</p>



<h2 class="wp-block-heading">Douglas Trumbull, a mission: to save cinema</h2>


<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="200" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/07/Douglas_Trumbull_2007-200x300.jpg" alt="Douglas Trumbull" class="wp-image-391" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/Douglas_Trumbull_2007-200x300.jpg 200w, https://www.darioriccio.com/wp-content/uploads/2022/07/Douglas_Trumbull_2007.jpg 681w" sizes="(max-width: 200px) 100vw, 200px" /><figcaption class="wp-element-caption">Douglas Trumbull. Image by <a href="https://www.flickr.com/people/64636204@N00" target="_blank" rel="noreferrer noopener">Jorge Ferrer</a> from Wikipedia.</figcaption></figure></div>


<p><em>Douglas Trumbull</em> was a man with a great idea: <strong>movies can be more realistic and more immersive than they already are.</strong></p>



<p>I would have liked to meet him in person; he was a legend to me. I tried writing to him a couple of times, offering to visit him at his <em>Trumbull Studios</em> in the <a href="https://www.google.ch/maps/place/83+New+Marlboro-Southfield+Rd,+Great+Barrington,+MA+01230,+Stati+Uniti/@42.1079709,-73.2421418,17z/data=!3m1!4b1!4m5!3m4!1s0x89e76fc86b4078e5:0x9f728c507853d409!8m2!3d42.1079669!4d-73.2399531" target="_blank" rel="noreferrer noopener">countryside of Massachusetts</a>, but unfortunately I got no reply. Perhaps the addresses I had were out of date, or perhaps he was too busy, and too old. I don&#8217;t know; the fact is that he is no longer with us, except through his research, which still has time to change the history of cinema.</p>



<p>He was one of the world&#8217;s greatest experts in special effects, first analog and then digital, creator of amazing visual masterpieces for the top Hollywood productions of the last century (<a href="https://www.imdb.com/title/tt0062622/" target="_blank" rel="noreferrer noopener">2001: A Space Odyssey</a>, <a href="https://www.imdb.com/title/tt0075860/" target="_blank" rel="noreferrer noopener">Close Encounters of the Third Kind</a>, <a href="https://www.imdb.com/title/tt0079945/" target="_blank" rel="noreferrer noopener">Star Trek</a> &#8230;). Once he began to sense the decline of traditional movie theaters, he devoted the rest of his life to finding a solution, which he too found, as a true pioneer, in <strong>immersive cinema</strong>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="463" src="https://www.darioriccio.com/wp-content/uploads/2022/07/1280px-2001_-_Odissea_nello_spazio-1024x463.jpg" alt="A scene from 2001: A Space Odyssey" class="wp-image-420" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/1280px-2001_-_Odissea_nello_spazio-1024x463.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/07/1280px-2001_-_Odissea_nello_spazio-300x136.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/07/1280px-2001_-_Odissea_nello_spazio-768x347.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/07/1280px-2001_-_Odissea_nello_spazio.jpg 1280w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">A scene from 2001: A Space Odyssey</figcaption></figure>



<p>Perhaps because of his analog habit of pursuing physical, tangible ideas rather than creating yet another app or system to monetize users&#8217; time while they are alone at home, Douglas remained a craftsman rather than a multinational. He certainly had art in his DNA: his father, Donald (&#8220;Pappy&#8221; Trumbull to friends), had worked in VFX since the 1930s and, although never officially credited, worked on 1939&#8217;s The Wizard of Oz.</p>



<p>It was during the filming of 2001: A Space Odyssey that, at only 25, Doug created the legendary psychedelic &#8220;Slit Scan&#8221; color stream, based on a technique developed shortly before by <a href="https://en.wikipedia.org/wiki/John_Whitney_(animator)"><em>John Whitney</em></a> for <em>Vertigo</em>.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Stanley Kubrick - 2001: A Space Odyssey, 1968 - Slit Scan Effect" width="678" height="381" src="https://www.youtube.com/embed/NaK4z3Fjpkk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Slit Scan effect in 2001: A Space Odyssey</figcaption></figure>



<h3 class="wp-block-heading">Immersivity is the future of cinema</h3>



<p>Here is part of an interview that <a href="https://filmmakermagazine.com/102741-interview-douglas-trumbull-2017/#.Ys1w7S8QOLc" target="_blank" rel="noreferrer noopener">Filmmakermagazine.com</a> conducted with Trumbull himself.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>If you look at the industry, it is moving more and more towards immersive experiences. Take virtual reality, for example; the problem is that people don&#8217;t want to wear the headset. Immersiveness is what TV cannot offer; when 3D took off, everyone started making 3D TVs. But now nobody makes them anymore.</p>



<p>I think the problem was miniaturization. When you look at a 3D image on the big screen, you have a 6 foot (1.82 meters) figure standing in front of you, while when you put it on a TV screen you have like a 6 inch (15 cm) figure. When I took the projects we were working on and played them on the TV screen to see what they looked like, the effect was completely lost.</p>
<cite>Douglas Trumbull</cite></blockquote>



<h3 class="wp-block-heading">Magi Pod: innovating the cinema </h3>



<p>In his later years, Douglas Trumbull and his Trumbull Studios developed a system that brought together many of the key requirements for renewing movie theaters: a redesign of both the shooting and the projection systems. At its core, 4K projection at 120 fps solves a problem Kubrick denounced over 50 years ago: <strong>24 fps motion blur is obtrusive and unrealistic</strong>. This is why, according to the author, <em>2001: A Space Odyssey</em> was already shot at a high framerate, although almost no theater was equipped to project it.</p>



<p>Trumbull wanted to bring back the magic he himself had felt as a child visiting the curved screens of <a href="https://it.wikipedia.org/wiki/Cinerama" target="_blank" rel="noreferrer noopener">Cinerama</a> theaters. But there were many technical problems to solve.</p>



<h3 class="wp-block-heading">Solving the problems of 3D projection</h3>



<p>To begin with, <strong>stereoscopic projection requires halving the framerate</strong> (the alternative, doubling the projectors, would be uneconomical). This means projecting at 60 fps per eye.</p>



<p>Then there is another, more concrete problem: <strong>in 3D films both images</strong> (right and left) <strong>are recorded simultaneously</strong>, but <strong>they are then projected one after the other</strong>, alternately (right eye, left eye, right eye, and so on). There is therefore a small temporal offset that is not consciously perceptible, but is not negligible to our unconscious. The solution? <strong>Record the two images 180 degrees of shutter phase apart</strong>, that is, alternately, exactly as they will later be projected. This also solves a second problem, concerning the &#8220;cinematic look&#8221;.</p>
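The shutter-phase trick can be sketched numerically. The following is my own illustrative model, not Trumbull's actual implementation: time is counted in integer "slots" of 1/120 s, the period of one projected frame, and the assertions show that offset capture lines up with alternating projection while simultaneous capture does not.

```python
# Illustrative model (author's-method claim NOT implied): time is measured
# in integer "slots" of 1/120 s, the period of one projected frame.

def projection_slots(n_frames):
    """The projector alternates eyes: L at slot 0, R at slot 1, L at 2, ..."""
    return [("L" if i % 2 == 0 else "R", i) for i in range(n_frames)]

def capture_slots_simultaneous(n_pairs):
    """Conventional 3D rig: both eyes are exposed at the same instant."""
    return [(eye, 2 * i) for i in range(n_pairs) for eye in ("L", "R")]

def capture_slots_offset(n_pairs):
    """180-degree shutter-phase offset: the right eye is captured half a
    stereo pair later, matching the order in which frames are projected."""
    out = []
    for i in range(n_pairs):
        out.append(("L", 2 * i))
        out.append(("R", 2 * i + 1))
    return out

# With the offset, capture instants coincide with projection instants;
# with simultaneous capture, every right-eye frame is shown one slot
# (about 8.3 ms) after the moment it was actually recorded.
assert capture_slots_offset(3) == projection_slots(6)
assert capture_slots_simultaneous(3) != projection_slots(6)
```

The point the sketch makes explicit is that the mismatch is purely one of timing: the same frames are shown either way, but only the offset capture lets each eye see the scene as it was at the instant of projection.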



<p><strong>A high framerate</strong> is in fact avoided in cinema because <strong>it makes the image look too much like video and TV</strong>. Trumbull therefore chose to reintroduce the flicker originally produced by cinematic 24 fps, thanks precisely to recording the left and right stereoscopic frames 180 degrees of shutter phase apart, and to reduce the luminance of the screen (14 fL in the Magi, less than 50 lux) without reducing the gain of the screen itself.</p>



<p>The entire Magi patent aims to increase audience involvement, and it takes physical form in the <em><strong>Magi Pod</strong></em>.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Douglas Trumbull MAGI POD" width="678" height="381" src="https://www.youtube.com/embed/pX_nqcVgkJw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div></figure>



<p>A small cinema: <strong>70 seats</strong>, all in the best viewing area; a <strong>toroidal silver screen</strong> (curved both horizontally and vertically) with 3x gain to increase contrast, and therefore immersion; <strong>32 speakers</strong> for an enhanced surround effect; and <strong><a href="http://www.rotarywoofer.com/" target="_blank" rel="noreferrer noopener">rotary subwoofers</a></strong> capable of reaching 1 Hz, creating a &#8220;4D&#8221; effect with their extraordinary vibrations.</p>



<p>Trumbull&#8217;s years of experience leading the technical departments of the biggest blockbusters, condensed into a single, magical, immersive cinema. If knowledge must always move forward, his cannot be ignored in designing the cinema of the future.</p>



<h3 class="wp-block-heading">Cross reflections in fulldome cinema</h3>



<p>When designing a fulldome cinema, the differences from the <em>Magi Pod</em> are many and fundamental. The Magi does not fill a dome, only part of one, and you cannot simply &#8220;enlarge&#8221; the field of view to cover 360 degrees. This raises a first big problem: <strong>cross reflections.</strong></p>



<p>Anyone who has worked with high-gain projection screens will have realized that a silver, 3x-gain screen is a kind of mirror: it reflects both light coming from outside and, on a curved screen, the projected image itself from one side to the other. This problem also affected Steven Spielberg&#8217;s 1991 production <em><a href="https://en.wikipedia.org/wiki/Back_to_the_Future:_The_Ride" target="_blank" rel="noreferrer noopener">Back to the Future: The Ride</a></em>, a fulldome re-imagining of the original <em>Back to the Future</em>.</p>



<p>By limiting the dome&#8217;s radius and keeping spectators within the optimal viewing angle (that is, in central positions), the problem can be reduced to almost zero. But a complete sphere must necessarily be a white screen with a gain (reflection ratio) below 1, otherwise the solution is still to be found. In that case we will certainly have to run practical experiments in the near future.</p>



<h2 class="wp-block-heading">Immersion in the cinema of the future</h2>



<p>In fact, almost every earlier study we discuss in this article is in some way tied to the digital world. But, as you may know, <strong>I am a staunch supporter of sociability and of a return to real life</strong>.</p>



<p><strong>The room will have to resemble a machine for traveling through space and time</strong>: you will enter a totally new dimension for the duration of the screening, and you will come out having had a complete experience. Above all, in company.</p>



<p>The world is heading toward the conquest of space, which will initially be within reach of a select few. The rest will want to try new experiences too, and they will have to do so while remaining firmly on the ground.</p>



<h2 class="wp-block-heading">Five senses for immersive cinema</h2>



<p>The viewer will have to become the protagonist, so <strong>the narrative will also have to change</strong>. Remember: leaving aside intuition, the so-called sixth sense, we still have <strong>five senses</strong> to work with.</p>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="225" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/07/Henry-Jenkins-225x300.jpg" alt="Henry Jenkins" class="wp-image-402" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/Henry-Jenkins-225x300.jpg 225w, https://www.darioriccio.com/wp-content/uploads/2022/07/Henry-Jenkins.jpg 750w" sizes="(max-width: 225px) 100vw, 225px" /><figcaption class="wp-element-caption">Henry Jenkins, from <a href="http://henryjenkins.org" target="_blank" rel="noreferrer noopener">http://henryjenkins.org</a></figcaption></figure></div>


<p>Today&#8217;s cinema manages to engage only two of them: <em>sight</em> and <em>hearing</em>. And, especially for sight, we can do a lot to improve the situation. Reading the December 2021 issue of <a href="https://www.wired.it/article/wired-99-edicola-buon-divertimento/" target="_blank" rel="noreferrer noopener">Wired Italia</a> (a magazine always full of interesting ideas), I found this paragraph in an article by <a href="http://henryjenkins.org/" target="_blank" rel="noreferrer noopener"><strong>Henry Jenkins</strong></a> (lecturer at the <a href="https://cinema.usc.edu/mediastudies/index.cfm" target="_blank" rel="noreferrer noopener">University of Southern California</a> and, for more than 10 years, director of the comparative media studies program at MIT in Boston; hardly a newcomer):</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>I am convinced that the metaverse will not be the future, both because devices such as Oculus headsets are terrible (especially for those who, like me, wear glasses), and because it is not the visual experience that guarantees the highest level of immersion, but the audio and haptic ones, as shown by decades of studies and by pioneers such as Matthew Shifrin and Nonny de la Peña.</p>
<cite><strong>Henry Jenkins</strong></cite></blockquote>



<p>Jenkins&#8217;s idea, similar in many ways to my own, prompted me to get to know the musician <em><a href="https://www.linkedin.com/in/matthew-shifrin-243361ba/" target="_blank" rel="noreferrer noopener">Matthew Shifrin</a></em>.</p>



<h2 class="wp-block-heading">Matthew Shifrin: Legos and the cinema of the future</h2>



<p><strong><em>Matthew Shifrin</em></strong> is a young musician, blind from birth. A <strong><a href="http://lego.it" target="_blank" rel="noreferrer noopener">Lego</a></strong> enthusiast since childhood, the bricks allow him to experiment, to touch, to &#8220;see&#8221; without eyes. Through touch, and therefore also through Lego, he can understand how the world is made. In his own eloquent example: <strong>he will never know what the Statue of Liberty is like without climbing it</strong>, and climbing the Statue of Liberty can be problematic&#8230; A small Lego statue, on the other hand, fulfills that function perfectly.</p>



<p>He is thirteen years old. Lilya, a family friend, rings at the door. She approaches Matthew with light steps, looks at him and says cheerfully: &#8220;I have something for you!&#8221;. Matthew, puzzled, smiles as the woman places a package in his hands. The boy opens it and touches a sheet written entirely in <a href="https://it.wikipedia.org/wiki/Braille" target="_blank" rel="noreferrer noopener">braille</a>: <strong>instructions for building Lego sets by himself</strong>. His eyes shine with happiness, and he wants to get to work right away.</p>



<h3 class="wp-block-heading">Lego for the blind</h3>



<p>Years later, this story led him to found &#8220;<a href="http://legofortheblind.com" target="_blank" rel="noreferrer noopener">Lego for the Blind</a>&#8221;, an association whose website offers downloadable instructions like these for about thirty Lego sets, and he convinced the company itself to include them in new products. He also created a truly brilliant system to help blind people rock climb.</p>



<p>Definitely an interesting character; but you may be wondering why we talk about him in an article on the cinema of the future. I understand; I was so taken by his story that for a moment I wondered too. Then I remembered where I wanted to go.</p>



<p>In his <a href="https://sites.suffolk.edu/suffolktedx/" target="_blank" rel="noreferrer noopener"><strong>TEDx Talk</strong> at <strong>Suffolk University</strong></a>, starting at minute six, he explains it well and in a very funny way.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Lego for the Blind | Matthew Shifrin | TEDxSuffolkUniversity" width="678" height="381" src="https://www.youtube.com/embed/Ls6JdUdogLQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div></figure>



<h3 class="wp-block-heading">Blind people in the cinema of the future</h3>



<p>The point is this: <strong>if the real world manages to be immersive for a blind person, then cinema can never be immersive in a way similar or equal to the real world unless it is immersive for a blind person too</strong>. The fact that the cinema of the future can also be better enjoyed by people with disabilities can only be an excellent and stimulating side effect.</p>



<p>While studying singing and accordion at the <a href="https://necmusic.edu" target="_blank" rel="noreferrer noopener">New England Conservatory</a> in Boston, Matthew continues to look for ways to make modern life more accessible for people like him.</p>



<p>In episode 3 of his <a href="https://radiotopiapresents.fm/blind-guy-travels" target="_blank" rel="noreferrer noopener">Blind Guy Travels</a> podcast, Matthew describes going to the movies with his friend Ben. Needing Ben to describe the visual part of every single scene, and even then not getting a complete picture of the whole, led him to create a tactile device for the cinema with his university.</p>



<p>With the support of the New England Conservatory&#8217;s <em>Entrepreneurial Musicianship Department</em>, Matt and his business partner have developed a <strong>programmable vibrating vest</strong> whose technology mimics the sensations of tumbling, flying and falling. This allows a new audience to effectively follow Marvel&#8217;s <em>Daredevil</em> saga, whose protagonist is a blind lawyer who becomes a superhero by night.</p>



<p>A few days ago I wrote an email to the conservatory asking for more details on this system, unfortunately without receiving an answer. If I get any news in the future I plan to add it in a comment on this article; it is an interesting topic.</p>



<h2 class="wp-block-heading">We will talk about experiential cinema</h2>


<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="200" height="300" src="https://www.darioriccio.com/wp-content/uploads/2022/07/pexels-cottonbro-8261572-2-200x300.jpg" alt="Girl at the cinema with 3D glasses." class="wp-image-422" srcset="https://www.darioriccio.com/wp-content/uploads/2022/07/pexels-cottonbro-8261572-2-200x300.jpg 200w, https://www.darioriccio.com/wp-content/uploads/2022/07/pexels-cottonbro-8261572-2.jpg 640w" sizes="(max-width: 200px) 100vw, 200px" /></figure></div>


<p>I have ready an analysis of the characteristics of current cinema and of how they evolve into experiential cinema: how the latter is interactive, immersive and algorithmic, along with other studies important for creating our cinema of the future.</p>



<p>But I decided to end the article here; it was getting too long and unfocused. I have already copied the remaining text into a new WordPress post, which I will complete and publish as soon as possible.</p>



<p>I renew my invitation to comment, to share doubts and criticisms, and above all to take part in this innovation if you like it. You can also write to me; my address is <a href="mailto:dario.riccio@gmail.com">dario.riccio@gmail.com</a>. See you soon!</p>
<p>The article <a href="https://www.darioriccio.com/en/immersive-cinema-is-the-future/">Immersive cinema is the future</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/immersive-cinema-is-the-future/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>The future of technology in art</title>
		<link>https://www.darioriccio.com/en/the-future-of-technology-in-art/</link>
					<comments>https://www.darioriccio.com/en/the-future-of-technology-in-art/#comments</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Wed, 09 Feb 2022 19:36:58 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[3d]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[visual arts]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/2022/02/09/the-future-of-technology-in-art/</guid>

					<description><![CDATA[<p>I have just come from an interactive exhibition on Deep Fake in the arts, hosted in the pavilions of the Polytechnic University of Lausanne (EPFL). This Swiss institution, which in 2019 developed the technology for the recognition of &#8220;human&#8221; deepfakes, is a world-class excellence in the field of science and technology. It wants to help us understand [&#8230;]</p>
<p>The article <a href="https://www.darioriccio.com/en/the-future-of-technology-in-art/">The future of technology in art</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-seriously-simple-podcasting-audio-player"><span>
<audio class="wp-audio-shortcode" id="audio-0-1" style="width:100%" controls><source type="audio/mpeg" src="https://www.darioriccio.com/podcast-player/207/il-futuro-della-tecnologia-nellarte.mp3?_=1"/>https://www.darioriccio.com/podcast-player/207/il-futuro-della-tecnologia-nellarte.mp3</audio></span></p>



<p>I have just come from an interactive exhibition on deepfakes in the arts, hosted in the pavilions of the <a href="https://epfl-pavilions.ch/" target="_blank" rel="noreferrer noopener">Polytechnic University of Lausanne (EPFL)</a>. This Swiss institution, which <a href="https://actu.epfl.ch/news/epfl-develops-solution-for-detecting-deepfakes/" target="_blank" rel="noreferrer noopener">in 2019 developed a technology for detecting &#8220;human&#8221; deepfakes,</a> is a world-class center of excellence in science and technology, and it wants to help us understand the future of technology in art.</p>



<p>The theme is very topical: digital technology makes it possible to create artistic fakes that are almost indistinguishable from the originals. But it also allows <strong>works to be enjoyed far from their places of origin</strong>, circumventing political or security problems. This is what the organizers focused on: evaluating a copy&#8217;s ability to produce emotions that last over time.</p>



<h2 class="wp-block-heading" id="gli-strumenti-della-tecnologia-nell-arte">The tools of technology in art</h2>



<p><strong>In a modern exhibition, interactivity is everything</strong>. For years I have argued that the &#8220;museum&#8221;, understood as a series of large rooms serving as parking lots for objects, is in most cases dead. The exceptions are certain masterpieces whose fame still attracts thousands, if not millions, of experts and onlookers, but that is not the norm. In Switzerland, over 75% of museums do not reach 5,000 visitors a year. In Italy, on the other hand, attendance has increased in recent years (excluding the pandemic period), which suggests that we are moving in the right direction to modernize the service offered.</p>



<p>The tools used to add technology to the world of art are mainly <strong>touch screens</strong>, <strong>holograms</strong>, <strong>mixed</strong>, <strong>augmented</strong> and <strong>virtual reality</strong>.</p>



<h3 class="wp-block-heading" id="proiezione-3d-multi-view-o-multi-spettatore">Multi-view, or multi-viewer 3D projection</h3>


<div class="wp-block-image">
<figure class="alignleft size-large is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/02/multi-view-projection-768x1024.jpeg" alt="" class="wp-image-70" width="384" height="512" srcset="https://www.darioriccio.com/wp-content/uploads/2022/02/multi-view-projection-768x1024.jpeg 768w, https://www.darioriccio.com/wp-content/uploads/2022/02/multi-view-projection-225x300.jpeg 225w, https://www.darioriccio.com/wp-content/uploads/2022/02/multi-view-projection.jpeg 900w" sizes="(max-width: 384px) 100vw, 384px" /><figcaption class="wp-element-caption">Multi-view screen with 3D glasses and 6 DOF tracking</figcaption></figure></div>


<p>Mea culpa: I was not aware of a beautiful technology, multi-view (or multi-viewer) 3D projection. Its operation is simple on paper, albeit technically complex given the huge amount of data to process. A video projector, specifically the <em><a href="https://www.digitalprojection.com/emea/dp-projectors/insight-4k-hfr-360/" target="_blank" rel="noreferrer noopener">Insight 4K HFR 360</a></em> (a toy worth over $300,000), projects 360 frames per second at 4K resolution. This exceptional framerate, combined with active 3D glasses (synchronized, of course, with that framerate), allows three different stereoscopic videos to be played simultaneously, giving 60 fps to each individual eye.</p>
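The arithmetic behind this time multiplexing can be checked with a short sketch. The round-robin interleaving order below is my own assumption for illustration; the real system's frame ordering may differ, but any fixed six-way interleave yields the same per-eye framerate.

```python
# Back-of-the-envelope sketch (author's own illustration) of how the
# Insight 4K HFR 360's 360 fps output is divided among viewers and eyes
# via synchronized shutter glasses: each projected frame belongs to
# exactly one (viewer, eye) slot.
PROJECTOR_FPS = 360
VIEWERS = 3
EYES = 2

fps_per_eye = PROJECTOR_FPS // (VIEWERS * EYES)   # 360 / 6 = 60 fps per eye

def slot_for_frame(frame_index):
    """Round-robin time-multiplexing (assumed order): returns which
    (viewer, eye) pair sees this projected frame."""
    slot = frame_index % (VIEWERS * EYES)
    return slot // EYES, ("left", "right")[slot % EYES]

assert fps_per_eye == 60
# Frames 0..5 cover all six (viewer, eye) combinations before repeating.
assert {slot_for_frame(i) for i in range(6)} == {
    (v, e) for v in range(3) for e in ("left", "right")
}
```

The sketch makes the trade-off concrete: every additional simultaneous viewer stream costs a full stereoscopic share of the projector's total framerate.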



<p>I tried it and, <strong>apart from the inherent discomfort of wearing active glasses on top of &#8220;normal&#8221; glasses</strong> like mine, the image is perfect.</p>



<p>To further improve the experience, there is <a href="https://it.wikipedia.org/wiki/6dof" target="_blank" rel="noreferrer noopener">6DOF</a> tracking (six degrees of freedom: the system knows where we are in the room and where we are looking), similar if not identical to that used by HTC Vive VR headsets (or the old Oculus). In simple terms, the 3D video changes perspective as we move around the room, achieving the effect of &#8220;seeing behind&#8221; the projected objects: a parallax shift borrowed from virtual reality. And here too, a nice way to add technology to art.</p>



<p>A note on the subject on display: a digitization, made with laser and scanner techniques, of the <a href="https://it.wikipedia.org/wiki/Abbazia_di_Monte_San_Michele" target="_blank" rel="noreferrer noopener">Abbey of Monte San Michele</a> in <a href="https://it.wikipedia.org/wiki/Bamberga" target="_blank" rel="noreferrer noopener">Bamberg</a> (Germany). Closed in 2012 for restoration after major structural damage, it can thus still be visited virtually.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="INSIGHT 4K HFR 360 Highlights" width="678" height="381" src="https://www.youtube.com/embed/O3X95xzFs4o?start=5&#038;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Video from the manufacturer&#8217;s website</figcaption></figure>



<h3 class="wp-block-heading" id="fotogrammetria-3d-automatica-con-il-cultarm3d">Automatic 3D photogrammetry with CultArm3D</h3>



<figure class="wp-block-video alignleft"><video controls muted src="https://www.darioriccio.com/wp-content/uploads/2022/02/cultarm3d-2.mp4"></video></figure>



<p>In an increasingly automated world, where humans are needed less and less (does that strike you as a problem?), it is no longer surprising to see a robotic arm photographing a mask from every angle to recreate it in 3D. It&#8217;s the <a href="https://www.cultarm3d.de/" target="_blank" rel="noreferrer noopener">CultArm3D</a>.</p>



<p>On display is a 3D reproduction of a samurai mask from the Edo period, made with the world&#8217;s first scanner capable of producing a perfect 3D copy by automatic photogrammetry. Developed through research at the <a href="https://www.igd.fraunhofer.de/en" target="_blank" rel="noreferrer noopener">Fraunhofer Institute for Computer Graphics</a> in <em>Darmstadt</em> (Germany), it can calculate an object&#8217;s dimensions, scan it (even when the materials are reflective), recreate it in color in 3D in real time, and make it ready for use in virtual or augmented reality, or for printing.</p>



<p>Realistic 3D printing is possible thanks to the development of adaptive color 3D printing; in this specific case, technologies from the Israeli company <a href="https://www.stratasys.com/" target="_blank" rel="noreferrer noopener">Stratasys</a> are used. All with a stated goal: showing ancient works without the risk of ruining the originals.</p>



<p>As I see it: good, but not great&#8230; An object like this lacks poetry, and I don&#8217;t know how many people would accept it in place of the true original. The fact remains that, beyond printing, such a machine allows significant time savings in digitizing the world&#8217;s cultural heritage (and in Italy, there is a lot to do).</p>



<h3 class="wp-block-heading" id="l-intelligenza-artificiale-ci-pone-all-erta">Artificial intelligence puts us on the alert</h3>



<figure class="wp-block-video alignright"><video controls src="https://www.darioriccio.com/wp-content/uploads/2022/02/trust_ai-1.mp4"></video></figure>



<p>This is what first comes to mind when imagining the future of technology in art. Charming, and at the same time a bit terrifying, <a href="https://zkm.de/en/trust" target="_blank" rel="noreferrer noopener">Trust AI</a> invites us to sit in front of her, take off our mask so that she can see us better, and answer her questions.</p>



<p>Her face is a hologram; she reads out our personal data (age, sex, hair color) and progressively hijacks our face until she assumes our appearance. In her speech, after having &#8220;stolen&#8221; our identity, she is keen to warn us about the data we share online, and about how it can be used for less than noble purposes by thieves.</p>



<p>Does artificial intelligence pose dangers? Certainly, but no more than many other technological innovations. And art installations like this one help the public understand how it works, the better to defend themselves.</p>



<p>The work was produced by <a href="https://www.bernd-lintermann.de/" target="_blank" rel="noreferrer noopener">Bernd Lintermann</a> and <a href="https://www.studio-hertweck.com/" target="_blank" rel="noreferrer noopener">Florian Hertweck</a> at the <a href="https://zkm.de/en/about-the-zkm/organization/hertz-lab" target="_blank" rel="noreferrer noopener">ZKM | Hertz-Lab</a>, in response to the disturbing spread of <a href="https://it.wikipedia.org/wiki/Deepfake" target="_blank" rel="noreferrer noopener">deepfakes</a>. I invite you to read the <a href="https://zkm.de/en/trust" target="_blank" rel="noreferrer noopener">long article</a> in which they describe the project; it&#8217;s definitely interesting.</p>



<h3 class="wp-block-heading" id="la-realta-aumentata-per-visitare-monumenti-storici">Augmented reality to visit historical monuments</h3>


<div class="wp-block-image">
<figure class="alignleft size-full is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/02/grotta-ar1.jpg" alt="" class="wp-image-89" width="375" height="500" srcset="https://www.darioriccio.com/wp-content/uploads/2022/02/grotta-ar1.jpg 500w, https://www.darioriccio.com/wp-content/uploads/2022/02/grotta-ar1-225x300.jpg 225w" sizes="(max-width: 375px) 100vw, 375px" /></figure></div>


<p>Beautiful, fun for children, but apart from that, <strong>visiting a cave reproduced in 3D</strong>, displayed in augmented reality with an iPad in hand, <strong>is not, I think, the best way to use technology in art</strong>. I find it useless, a pure exercise in style; no offense to the creators, whose work is indeed exceptional.</p>



<p>Mind you, we are talking about one of the <a href="https://it.wikipedia.org/wiki/Grotte_di_Mogao" target="_blank" rel="noreferrer noopener">Mogao caves</a>, near the city of <a href="https://it.wikipedia.org/wiki/Dunhuang" target="_blank" rel="noreferrer noopener">Dunhuang</a> (China), dating back to the <a href="https://it.wikipedia.org/wiki/Dinastia_Tang" target="_blank" rel="noreferrer noopener">Tang dynasty</a>. In other words, something very ancient, endangered by excess tourism. Digital representations are welcome, but this is 100% the territory of virtual reality: straying into AR does not produce the same results.</p>


<div class="wp-block-image">
<figure class="alignright size-full is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/02/grotta-ar2.jpg" alt="" class="wp-image-90" width="250" height="334" srcset="https://www.darioriccio.com/wp-content/uploads/2022/02/grotta-ar2.jpg 500w, https://www.darioriccio.com/wp-content/uploads/2022/02/grotta-ar2-225x300.jpg 225w" sizes="(max-width: 250px) 100vw, 250px" /></figure></div>


<p>Holding the big tablet in your hand is uncomfortable, to begin with. And since it&#8217;s all reconstructed in 3D, why not take advantage of VR to feel like you&#8217;re really there, instead of walking into a black cube with white textures drawn on the walls?</p>



<p>Augmented reality can really do a lot to improve the quality of a tourist visit, but in the real place. An iPad can help reproduce scenes of life from the past, legends, descriptive videos of a particular spot… But <strong>for a purely virtual visit, let&#8217;s do it right</strong>.</p>



<h3 class="wp-block-heading" id="realta-aumentata-che-ci-fa-riflettere">Augmented reality that makes us think</h3>



<p>As soon as you enter the exhibition, two works arouse curiosity: the &#8220;Reclining Pan&#8221;, which we will talk about shortly, and an isolated white pedestal. The latter is the base for &#8220;<a href="https://www.jeffreyshawcompendium.com/portfolio/golden-calf/" target="_blank" rel="noreferrer noopener">The Golden Calf</a>&#8221; <em>by</em> <strong>Jeffrey Shaw</strong>: a calf that takes shape only when you point an iPad at the pedestal, reproducing it thanks to augmented reality.</p>


<div class="wp-block-image">
<figure class="alignright size-large is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/02/golden-calf-1024x768.jpg" alt="" class="wp-image-101" width="384" height="288" srcset="https://www.darioriccio.com/wp-content/uploads/2022/02/golden-calf-1024x768.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/02/golden-calf-300x225.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/02/golden-calf-768x576.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/02/golden-calf.jpg 1200w" sizes="(max-width: 384px) 100vw, 384px" /><figcaption class="wp-element-caption">Image from EPFL website</figcaption></figure></div>


<p>The importance of The Golden Calf becomes very clear when you consider the year of production: 1994. Shaw was a true pioneer of augmented reality, and obviously in its original form there was no iPad, but a screen equipped with magnetic motion tracking by <em><a href="https://polhemus.com/" target="_blank" rel="noreferrer noopener">Polhemus</a></em>, one of the very few companies offering this technology at the end of the last century.</p>



<p>The most interesting effect is the reflections on the calf. Four cameras placed on the sides of the pedestal reproduce our image on the subject, to give it that extra touch of realism, to make the work always different, and to make the viewer himself a co-author. And the calf, elusive, is the object of our desires.</p>
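<p>For the technically curious: live reflections like these are usually achieved with environment mapping, where the camera feeds become a texture and each point of the model looks up its reflection in it. Below is a minimal sketch of the classic sphere-mapping lookup; it illustrates the general technique, not Shaw&#8217;s actual implementation.</p>

```python
import numpy as np

def reflect(view, normal):
    """Reflect a view direction about a surface normal (both unit vectors)."""
    return view - 2.0 * np.dot(view, normal) * normal

def sphere_map_uv(view, normal):
    """Classic sphere-mapping lookup: turn the reflection vector into
    texture coordinates in [0, 1]^2 on the environment texture."""
    r = reflect(view, normal)
    m = 2.0 * np.sqrt(r[0]**2 + r[1]**2 + (r[2] + 1.0)**2)
    return np.array([r[0] / m + 0.5, r[1] / m + 0.5])

# A surface patch facing the viewer reflects straight back at the camera:
# the lookup lands in the center of the environment texture.
uv = sphere_map_uv(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
print(uv)  # [0.5 0.5]
```

<p>In the installation, the environment texture would be rebuilt every frame from the four camera feeds, which is what makes the reflections follow the spectator.</p>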



<h3 class="wp-block-heading" id="stampa-3d-per-la-salvaguardia-della-scultura">3D printing for the protection of the sculpture</h3>


<div class="wp-block-image">
<figure class="alignleft size-full is-resized"><img loading="lazy" decoding="async" src="https://www.darioriccio.com/wp-content/uploads/2022/02/reclining-pan.jpg" alt="" class="wp-image-102" width="350" height="272" srcset="https://www.darioriccio.com/wp-content/uploads/2022/02/reclining-pan.jpg 700w, https://www.darioriccio.com/wp-content/uploads/2022/02/reclining-pan-300x233.jpg 300w" sizes="(max-width: 350px) 100vw, 350px" /><figcaption class="wp-element-caption">Image from artist&#8217;s website</figcaption></figure></div>


<p>The &#8220;Reclining Pan&#8221; is the modern copy of an ancient copy. The original work is attributed to the Florentine sculptor Francesco Da Sangallo and was sculpted around 1535; it was itself a copy of an earlier work whose original has been lost.</p>



<p>The contemporary artist <a href="http://oliverlaric.com/" target="_blank" rel="noreferrer noopener"><strong>Oliver Laric</strong></a> made a 3D scan of it (which you can download free of charge from <a href="https://threedscans.com/saint-louis-art-museum/reclining-pan/" target="_blank" rel="noreferrer noopener">Three D Scans</a>), then printed it in many different materials. The result is a new, modern, and provocative work. It calls into question the status of the copy in the post-original era, rejecting the postulate that art is singular and privately owned.</p>



<p>As part of this project, <em>Laric</em> has made many 3D-scanned works available, free of charge, to anyone who wants to download them from the <a href="https://threedscans.com/" target="_blank" rel="noreferrer noopener">Three D Scans website.</a></p>
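<p>The scans are distributed as ordinary mesh files; binary STL, the most common interchange format for 3D printing, is simple enough to read by hand. Below is a minimal sketch of its layout (a generic illustration of the format, not tied to Laric&#8217;s specific files, which may also be offered in other formats).</p>

```python
import struct

def make_binary_stl(triangles):
    """Build a minimal binary STL: an 80-byte header, a uint32 triangle
    count, then 50 bytes per triangle (normal + 3 vertices as float32,
    plus a 2-byte attribute field)."""
    data = b"\x00" * 80 + struct.pack("<I", len(triangles))
    for normal, v1, v2, v3 in triangles:
        for vec in (normal, v1, v2, v3):
            data += struct.pack("<3f", *vec)
        data += b"\x00\x00"  # attribute byte count, usually zero
    return data

def stl_triangle_count(data):
    """Read the triangle count field that follows the 80-byte header."""
    (count,) = struct.unpack_from("<I", data, 80)
    return count

# One triangle in the XY plane, normal pointing up the Z axis.
tri = ((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))
blob = make_binary_stl([tri])
print(stl_triangle_count(blob), len(blob))  # 1 134
```

<p>Slicers for 3D printers consume exactly this structure, which is why a downloaded scan can go straight from the website to the printer.</p>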



<h2 class="wp-block-heading" id="nuove-strade-per-il-futuro-della-tecnologia-nell-arte">New paths for the future of technology in art</h2>



<p>I have not described all the works here; there were over 20. I wanted to talk about those that most make us reflect on the changes technology is bringing to the world of art. Beyond these, it is also worth mentioning the blockchain, represented here by the multimedia work &#8220;<a href="http://www.ricegallery.org/eve-sussman" target="_blank" rel="noreferrer noopener">89 Seconds at Alcázar</a>&#8221;, by <a href="https://it.frwiki.wiki/wiki/Eve_Sussman" target="_blank" rel="noreferrer noopener">Eve Sussman</a>: a video whose rights were divided into 2,304 &#8220;atoms&#8221; of 20 × 20 pixels each and sold via blockchain. The EPFL reproduction was full of holes, as some atom owners did not consent to free public viewing.</p>
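<p>As a back-of-the-envelope check on those numbers: 2,304 atoms of 20 × 20 pixels happen to tile a standard 720p HD frame exactly. The frame size below is my assumption, not something stated above; a minimal sketch:</p>

```python
# 2,304 "atoms" of 20x20 pixels: assuming a standard 720p HD frame
# (1280x720 -- an assumption, the resolution is not given above),
# the arithmetic works out exactly.
FRAME_W, FRAME_H = 1280, 720   # assumed frame size in pixels
ATOM = 20                      # atom edge length in pixels

cols = FRAME_W // ATOM          # atoms per row
rows = FRAME_H // ATOM          # atoms per column
print(cols, rows, cols * rows)  # 64 36 2304
```

<p>Each atom is thus one fixed 20-pixel-square tile of the frame, which is why withheld atoms appear as literal holes in the playback.</p>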



<p>A bright future, in which the public domain will be increasingly important. But at the same time, a future in which artists will have more and more ways to experiment with their creations, and to spread them to the whole world.</p>



<p>In which it will be easier to organize thematic art exhibitions, even in small venues, without the enormous cost of transporting works of great value.</p>



<p>In which spectators will be able to admire and study works of art from all over the world, and from all eras, without having to travel thousands of kilometers. Traveling, for heaven&#8217;s sake, is beautiful. But sometimes, and for some, impossible.</p>
<p>The post <a href="https://www.darioriccio.com/en/the-future-of-technology-in-art/">The future of technology in art</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/the-future-of-technology-in-art/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
