<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>artificial intelligence Archives - Future of Cinema - by Dario Riccio</title>
	<atom:link href="https://www.darioriccio.com/en/tag/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.darioriccio.com/en/tag/artificial-intelligence/</link>
	<description>Technology and Business</description>
	<lastBuildDate>Thu, 16 Nov 2023 09:07:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.1</generator>

<image>
	<url>https://www.darioriccio.com/wp-content/uploads/2022/02/cropped-android-chrome-512x512-2-32x32.png</url>
	<title>artificial intelligence Archives - Future of Cinema - by Dario Riccio</title>
	<link>https://www.darioriccio.com/en/tag/artificial-intelligence/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>AI in Film Production: How to Produce an Innovative Low-Budget Film</title>
		<link>https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/</link>
					<comments>https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/#respond</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Sun, 12 Nov 2023 12:29:56 +0000</pubDate>
				<category><![CDATA[Cinema]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[immersive cinema]]></category>
		<category><![CDATA[innovation]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/</guid>

					<description><![CDATA[<p>In today&#8217;s article, I want to start by introducing you to a major project we are working on together with our usual partner nExt, combining artificial intelligence and cinema. We are using AI for the cinematic production of an immersive 360-degree film, initially projected in mobile domes. It leverages all the latest technologies to deliver a memorable experience that [&#8230;]</p>
<p>The article <a href="https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/">AI in Film Production: How to Produce an Innovative Low-Budget Film</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In today&#8217;s article, I want to start by introducing you to a major project we are working on together with our usual partner <strong><em>nExt</em></strong>, combining artificial intelligence and cinema. We are using AI for the cinematic production of an immersive 360-degree film, initially projected in mobile domes. It leverages all the latest technologies to deliver a memorable experience that will combine the real and the virtual. Basically: <strong>a milestone for what will be the cinema of the future</strong>.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img fetchpriority="high" decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/dome-300x300.jpg" alt="People in an imaginary dome with projections of fish and planets in a large sky." class="wp-image-864" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/dome-300x300.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/dome-150x150.jpg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/dome.jpg 512w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>It promises to leave a deep mark and represent a significant turning point in the history of cinema: a more democratic cinema, more present among the people, recovering what cinema originally was&#8230; and what it has failed to be for so long.</p>

<p>In this article I want to outline the basic idea in broad, technical terms, and share with you some possible areas where artificial intelligence can help us get the most out of an intentionally small budget.</p>

<p>We can also consider it an update, at a much more advanced stage, of the previous article from February 2023, entitled <em>
  <a href="https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/">How to Make Low Budget Movies with Artificial Intelligence &#8211; First Steps</a>
</em>.</p>

<h2 class="wp-block-heading">The revolutionary impact of Artificial Intelligence in cinema</h2>

<p>At one time, special effects and narrative techniques were the magic wands of cinema, but today artificial intelligence is playing a revolutionary card. It is an incredible help, an accomplice that opens the door to unprecedented innovation. We are riding this wave, with AI by our side lending a hand in creating compelling stories, digital characters that feel real, and music that gets right to your heart. But let us not forget the human touch, the real beating heart of each of our creations.</p>

<p>&#8220;Artificial intelligence and cinema&#8221; is not an empty slogan, but the beginning of a new chapter in storytelling. We are ready to prove that the future of entertainment is no longer a distant dream: it is here, and it is animated by artificial intelligence.</p>

<h2 class="wp-block-heading">Luna Dolph, Kyle and China: from virtual to real life</h2>
<div class="wp-block-image">
<figure class="alignright size-medium"><img decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen-300x300.jpg" alt="Drawing of a robot with a baby" class="wp-image-877" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen-300x300.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen-150x150.jpg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/robot-and-teen.jpg 512w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>The story, scripted by <em>
  <strong>
    <a href="https://www.linkedin.com/in/gbernasconi?originalSubdomain=fr" target="_blank" rel="noreferrer noopener">Gérard Bernasconi</a>
  </strong>
</em>, starts with the story of <em>Luna Dolph</em>, <em>Kyle</em> and <em>China</em>; not mere characters, but three-dimensional avatars breathing life into a fascinating and complex virtual world. These avatars are not only protagonists of a story, but <strong>symbols of our age</strong>. Vivid representations of <strong>our progressive immersion in the virtual universe</strong>, also known as the <em>metaverse</em>. This concept, once relegated to science fiction, is rapidly taking shape and influencing diverse industries, from video games to social networking platforms, transforming the way we interact, work and have fun.</p>

<h3 class="wp-block-heading">Virtual Reality and Digital Identities</h3>
<div class="wp-block-image">
<figure class="alignleft size-medium"><img decoding="async" width="300" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema-300x300.jpg" alt="A girl and a robot together in a movie theater" class="wp-image-872" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema-300x300.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema-150x150.jpg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/artificial-intelligence-and-cinema.jpg 512w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>Our goal is not only to tell a compelling story, but to invite the audience to <strong>reflect on the nature of reality in a digital age</strong>. These avatars, with their complex interactions and intricate worlds, represent the ongoing fusion of our physical and digital lives. A convergence that is redefining the very meaning of identity, community and belonging.</p>

<p>At the core of our narrative we find Luna Dolph, Kyle and China. They are three three-dimensional avatars whose lives take place in a virtual world of extraordinary beauty. While existing in the digital ether, their story is a bridge to reality, a means of reminding our viewers of the irreplaceable value of human interaction and real life. In an age when digitization has transformed our ways of connecting, our narrative aims to use technology not as a refuge, but as a springboard to rediscover and reinvigorate authentic and tangible sociality.</p>

<h3 class="wp-block-heading">Technology helps you live better</h3>

<p>As Luna, Kyle and China navigate a fascinating metaverse, their experiences and challenges resonate with a clear message: <strong>technology, however advanced, is a tool that, when used wisely, can enrich but not replace the warmth and complexity of human connections</strong>. Our goal is to bring out the realization that despite the attractiveness of digital, real life takes place off the screen. In the shared laughter, handshakes, hugs and spontaneous moments that make up the fabric of our existence.</p>

<p>With this storytelling, we aspire to inspire viewers to lift their gaze from their devices and re-immerse themselves in the real world, enriching their lives with authentic experiences. Through the exploration of virtual worlds, we want to celebrate and promote the beauty and irreplaceable importance of real life and human sociality.</p>

<p>The film will represent the first outing in the real world for the three main characters.</p>

<h2 class="wp-block-heading">How we use AI in film production</h2>

<p>We are still in the pre-production stage, so from a practical point of view I will keep you updated in the coming months. For now, we have a rough idea: we have selected a compendium of the latest artificial intelligence (AI) technologies that are both affordable and available to all.</p>

<p>The film will be in <strong>fulldome</strong>, an immersive 360&#215;180-degree format. We will essentially project it in domes and planetariums. This is a crucial, future-looking innovation as we push more and more toward total immersiveness. And we have a limited budget: roughly between 10,000 and 20,000 euros. The film will last about 40 minutes; about 30 percent will take place in the virtual world of Luna and her friends (entirely recreated in <em>
  <a href="https://www.unrealengine.com/" target="_blank" rel="noreferrer noopener">Unreal Engine</a>
</em>), the remaining 70 percent in the real world.</p>

<h3 class="wp-block-heading">Using Luma AI for the cinema of the future</h3>

<p>To open this technological arsenal, the first tool I will mention is <strong><em><a href="https://lumalabs.ai">Luma AI</a></em></strong>, a pioneering solution that completely revolutionizes the generation of three-dimensional environments. <strong>Luma AI allows us to reproduce real environments within Unreal Engine, in photorealistic quality and </strong>even<strong> with an iPhone</strong>. Employing advanced technologies such as <a href="https://www.matthewtancik.com/nerf" target="_blank" rel="noreferrer noopener"><strong>Neural Radiance Fields</strong></a> (NeRF) and the brand-new <strong><a href="https://arxiv.org/pdf/2308.04079.pdf" target="_blank" rel="noreferrer noopener">Gaussian Splatting</a></strong> (the latter published just three months ago by Université Côte d&#8217;Azur, France), we can capture the complexity and richness of the real world in digital format, bring it into Unreal Engine (including the ability to move freely within the scene) and give life to scenes previously reserved for large budgets.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="300" height="181" src="https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-300x181.jpg" alt="Screenshot of the Luma AI website" class="wp-image-885" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-300x181.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-1024x619.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-768x464.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-1536x928.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/11/lumaai-2048x1237.jpg 2048w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>We lower the costs of creating detailed settings and complex objects, and we streamline and speed up the production process. Luma AI not only democratizes access to world-class technologies, but also gives us the tools to experiment at a speed that was previously unimaginable. And it allows us to impart an unprecedented level of vividness and depth to our scenes.</p>
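<p>To give a sense of what happens under the hood of NeRF, here is a minimal, illustrative Python sketch of its volume-rendering step: a trained network (replaced below by a placeholder function) returns a density and a color for each 3D point sampled along a camera ray, and the samples are blended by transmittance into a single pixel. This is a toy reduction of the technique for intuition, not Luma AI&#8217;s actual code.</p>

<pre class="wp-block-code"><code>import numpy as np

def radiance_field(points):
    """Placeholder for the trained network: density and RGB per 3D point."""
    density = np.exp(-np.linalg.norm(points, axis=-1))  # fake density
    color = np.abs(np.sin(points))                      # fake RGB in [0, 1]
    return density, color

def render_ray(origin, direction, near=0.1, far=4.0, n_samples=64):
    # Sample points along the ray between the near and far planes
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = radiance_field(points)

    # Opacity of each segment, then transmittance (how much light survives)
    delta = np.diff(t, append=t[-1])
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha

    # The pixel color is the transmittance-weighted sum of sampled colors
    return (weights[:, None] * color).sum(axis=0)

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)</code></pre>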

<h3 class="wp-block-heading">Skybox AI for cinema &#8211; Simplified lighting on Unreal Engine</h3>
<div class="wp-block-image">
<figure class="alignleft size-medium"><img loading="lazy" decoding="async" width="300" height="168" src="https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-300x168.jpg" alt="Skybox AI user interface" class="wp-image-888" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-300x168.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-1024x575.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-768x431.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-1536x862.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/11/skyboxAI-2048x1150.jpg 2048w" sizes="(max-width: 300px) 100vw, 300px" /></figure></div>
<p>Another crucial tool in our repertoire is <strong>
  <a href="https://skybox.blockadelabs.com" target="_blank" rel="noreferrer noopener">Skybox AI</a>
</strong> by <em>
  <a href="https://www.blockadelabs.com" target="_blank" rel="noreferrer noopener">Blockade Labs</a>
</em>, for creating immersive <em>skyboxes</em> that enrich virtual scenes with vital details about lighting and setting.</p>

<p>A <a href="https://it.wikipedia.org/wiki/Skybox_(videogiochi)" target="_blank" rel="noreferrer noopener">skybox</a> in Unreal Engine not only provides visually convincing surroundings (mountains, sky, distant houses, etc&#8230;), but <strong>also affects the overall lighting of 3D assets within the scene</strong>. This is what interests us most in filmmaking: it acts as an ambient light source, reflecting its colors and hues on objects, helping to <strong>create a consistent and realistic atmosphere</strong>. For example, a skybox depicting a sunset will infuse warm orange and red hues on the scene; while a night skybox will provide a cooler, dimmer light. This process helps integrate 3D assets into the surrounding environment, making the entire visual experience more immersive and coherent.</p>

<p>This aspect proves essential in creating 360-degree fulldome environments, where every tiny detail is critical to sustaining the illusion of a fully immersive world.</p>
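<p>As a rough illustration of why the skybox drives the mood of a scene, the ambient contribution of an equirectangular panorama can be approximated by averaging it over the sphere, weighting each pixel row by the solid angle it covers. This is only a back-of-the-envelope sketch with a hypothetical file name, not how Unreal Engine actually computes sky lighting, but it shows why a sunset skybox tints everything warm.</p>

<pre class="wp-block-code"><code>import numpy as np
from PIL import Image

# Load an equirectangular skybox panorama (hypothetical file name)
sky = np.asarray(Image.open("sunset_skybox.jpg"), dtype=np.float32) / 255.0
height, width = sky.shape[:2]

# Rows near the poles cover less solid angle on the sphere,
# so weight each row by the cosine of its latitude.
latitudes = np.linspace(np.pi / 2, -np.pi / 2, height)
weights = np.cos(latitudes)

# Weighted average over the whole sphere = approximate ambient light color
ambient = (sky * weights[:, None, None]).sum(axis=(0, 1)) / (weights.sum() * width)
print("approximate ambient RGB:", ambient)</code></pre>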

<h3 class="wp-block-heading">Using Kaiber AI for our film</h3>

<p><strong><a href="https://kaiber.ai" target="_blank" rel="noreferrer noopener">Kaiber AI</a> </strong>is useful and quality, but I have a serious doubt: I don&#8217;t know if it works with fulldome video. I haven&#8217;t had time to experiment with it yet, but it will possibly help smooth out the scenes so that the assets can be merged even better. It works through AI directly on the final video files.</p>

<h3 class="wp-block-heading">Artificial intelligence and cinema: Reverie AI</h3>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="608" src="https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-1024x608.jpg" alt="Reverie AI website screenshot" class="wp-image-890" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-1024x608.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-300x178.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-768x456.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-1536x911.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/11/reverieai-2048x1215.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>In anticipation of the launch of <a href="https://reverieai.net" target="_blank" rel="noreferrer noopener">
  <strong>Reverie AI</strong>
</a>, we are intrigued by its promise of creating virtual worlds for Unreal Engine almost by &#8220;copying&#8221; a pre-existing picture. Its ability to generate scenery that faithfully mimics reality, combined with its potential in color correction of virtual scenes, opens doors to unlimited possibilities in visual storytelling. <em>Reverie AI</em> promises to be an excellent complement to our workflow, improving visual consistency and ensuring smooth and convincing transitions between the virtual and real worlds.</p>

<h3 class="wp-block-heading">Move.ai, cheap and working mocap</h3>

<p>Another &#8220;smart&#8221; tool we plan to use for our film will be <strong>Move.ai</strong>, specifically the inexpensive service <em>
  <a href="https://www.move.ai/single-camera" target="_blank" rel="noreferrer noopener">Move One</a>
</em> (on launch offer at $15 a month, then expected to cost $30). With a simple <a href="https://apps.apple.com/us/app/move-one/id6448635527" target="_blank" rel="noreferrer noopener">iPhone app</a>, we will be able to create simple, ready-to-use custom animations without too many fixes or clean-ups. This reduces time and cost, freeing us from the limits of the animations already available in services such as <em>
  <a href="https://www.mixamo.com" target="_blank" rel="noreferrer noopener">Mixamo</a>
</em>, <em>
  <a href="https://actorcore.reallusion.com" target="_blank" rel="noreferrer noopener">ActorCore</a>
</em> or the <a href="https://mocap.market/">
  <em>MoCap Market</em>
</a> by Noitom.</p>

<p>Here is a very quick video test:</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Move.Ai single camera Motion capture Test &quot;MOVE ONE&quot;" width="678" height="381" src="https://www.youtube.com/embed/mJoZURcNiL4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div></figure>

<p>Clearly, they also offer a more expensive professional service that allows up to 8 cameras to be used. But we do not plan on using it for this first production unless a very real need arises.</p>

<h2 class="wp-block-heading">Speech-to-speech for film dubbing&#8230; Is it possible?</h2>

<p>In our production, innovation does not stop with the creation of the digital world; in fact, we also want to take advantage of the ongoing technological revolution in traditional aspects of filmmaking, such as <strong>dubbing</strong>.</p>

<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1024" height="1024" src="https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech.jpeg" alt="Image generated with MidJourney showing a man from the future talking to a robot." class="wp-image-893" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech.jpeg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech-300x300.jpeg 300w, https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech-150x150.jpeg 150w, https://www.darioriccio.com/wp-content/uploads/2023/11/speech-to-speech-768x768.jpeg 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>The voice of Luna and the other 3D avatars must be consistent (not to say identical) across all media, in both the virtual and real worlds. We face a complex challenge: <strong>maintaining the same voice across multiple platforms</strong>, including social media and especially real-time streaming, <strong>without tying ourselves to a single voice actor</strong>. This gives us flexibility in storytelling and lets us adapt to various formats without depending on the availability of a specific actor.</p>

<p><strong>The idea is to replicate the voices of some real actors and associate them with the avatars</strong> (Luna, Kyle, and China for starters), then transform any actor&#8217;s voice, in real time, into Luna&#8217;s. We would use it both for dubbing films and social content (in the original language as much as in translations), and for <em><a href="https://it.wikipedia.org/wiki/Live_streaming" target="_blank" rel="noreferrer noopener">live streaming</a></em>, superimposing these voices on the avatar animated in <a href="https://docs.unrealengine.com/5.3/en-US/recording-face-animation-on-ios-device-in-unreal-engine/" target="_blank" rel="noreferrer noopener">motion capture</a>.</p>
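<p>To make the real-time requirement concrete, here is a minimal sketch of the kind of streaming loop any speech-to-speech system needs: audio is captured in small blocks, pushed through the conversion model, and played back immediately, so the overall latency stays within the block duration plus inference time. The <code>convert</code> function is a stand-in for whichever model we end up choosing, and the sample rate and block size are illustrative assumptions.</p>

<pre class="wp-block-code"><code>import sounddevice as sd

SAMPLE_RATE = 16000   # assumed model sample rate
BLOCK = 4096          # ~256 ms per block: the latency/quality trade-off

def convert(audio_block):
    """Stand-in for the voice-conversion model (e.g. an RVC inference call).
    Here it simply passes the audio through unchanged."""
    return audio_block

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    # Feed the actor's voice to the model and play the converted result
    outdata[:] = convert(indata.copy())

# Full-duplex stream: microphone in, converted voice out
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK,
               channels=1, dtype="float32", callback=callback):
    print("Speak into the microphone; Ctrl+C to stop.")
    sd.sleep(60_000)  # run for one minute</code></pre>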

<h3 class="wp-block-heading">From the excellent but expensive Respeecher, to the ambiguous Voicemod</h3>

<p>We explored options such as <a href="https://www.respeecher.com" target="_blank" rel="noreferrer noopener">
  <strong>Respeecher</strong>
</a>, an advanced speech-to-speech conversion tool, but the costs for real-time are prohibitive. We&#8217;re talking about 1,000 or 2,000 a month for a few hours of use. <strong>
  <a href="https://www.voicemod.net/ai-voices/" target="_blank" rel="noreferrer noopener">Voicemod</a>
</strong> presents itself as a cheaper solution with its <em>AI voices</em>, although there are conflicting rumors about its reliability (some even consider it malware or <a href="https://www.kaspersky.it/resource-center/definitions/what-is-cryptojacking" target="_blank" rel="noreferrer noopener">cryptojacking</a>&#8230;). And there remains the problem of rights: the voices belong to them, not to us, which will certainly prove to be a problem in the future. I do not yet know the cost for real-time conversion of <em>
  <strong>
    <a href="https://www.resemble.ai" target="_blank" rel="noreferrer noopener">Resemble.ai</a>
  </strong>
</em>, which I have used in the past for the much cheaper text-to-speech, and of <a href="https://www.veritonevoice.com" target="_blank" rel="noreferrer noopener">
  <strong>Veritone Voice</strong>
</a>.</p>

<p>Another tool that I have not been able to test is <strong>
  <a href="https://crimsontech.jp/apps/voidol3/?lang=en" target="_blank" rel="noreferrer noopener">Voidol 3</a>
</strong>, at a cost of about $300. I couldn&#8217;t find a demo version, but I admit I didn&#8217;t try that hard to request one. It is one of several Japanese programs born of the country&#8217;s typical passion for the anime world, but it is adaptable to our purpose, as we will see in a moment with another Japanese project.</p>

<h3 class="wp-block-heading">MetaVoice Live, Mangio RVC Fork and W-Okada Voice Changer, free and open-source</h3>

<p><strong>The final solutions</strong>, after much research, are <strong>
  <a href="https://github.com/metavoicexyz/MetaVoiceLive/releases" target="_blank" rel="noreferrer noopener">MetaVoice Live</a>
</strong> and the <a href="https://github.com/w-okada/voice-changer" target="_blank" rel="noreferrer noopener">
  <strong>Voice</strong>
</a><a href="https://github.com/w-okada/voice-changer"><strong> Changer</strong> from W-Okada</a>. Both <strong>open source</strong>, which reassures us that we can base the &#8220;future life&#8221; of the characters on these services, and both <strong>free</strong>. I cannot fail to mention the YouTube channel <a href="https://www.youtube.com/@ai-tools-search" target="_blank" rel="noreferrer noopener">
  <em>AI Tools Search</em>
</a> which has been most useful to me. Among the most interesting videos in this area, certainly this one:</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Make UNLIMITED AI Voice Conversions, Training, &amp; Covers for FREE: RVCv2 Installation &amp; Tutorial" width="678" height="381" src="https://www.youtube.com/embed/ixB9oalT3cQ?start=475&#038;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div></figure>

<p>I particularly like <em>MetaVoice</em>: it is under active development and has a cloud version for non-real-time conversions that provides greater quality and flexibility. This one is not free, but at between $10 and $25 per month the cost is, all in all, manageable.</p>

<p>In contrast, <em>W-Okada</em>&#8217;s <em>Voice Changer</em> has many independent developers building solutions compatible with it, such as GitHub user <a href="https://github.com/Mangio621" target="_blank" rel="noreferrer noopener">
  <em>Mangio621</em>
</a> who created the <strong>
  <a href="https://github.com/Mangio621/Mangio-RVC-Fork/releases/tag/v23.7.0">Mangio RVC Fork</a>
</strong>, a program with a web interface that can transform the voice of a real actor into that of the chosen voice model. Many of these models, especially of famous people, are already available on sites such as <a href="http://voice-models.com" target="_blank" rel="noreferrer noopener">Voice-Models.com</a> in &#8220;.pth&#8221; format (a classic format used in machine learning). But the best thing is that with the same <em>Mangio RVC</em> we can perform the training of a custom voice, entirely locally thanks to a good video card. And therefore always available, and for free.</p>
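<p>For the curious: a &#8220;.pth&#8221; file is simply a serialized PyTorch object, typically a dictionary holding the trained weight tensors plus some configuration metadata. A quick, hedged way to peek inside a downloaded voice model looks like this (the file name is hypothetical, and the exact keys vary between RVC versions):</p>

<pre class="wp-block-code"><code>import torch

# Load the checkpoint on the CPU; a GPU is only needed for training/inference
checkpoint = torch.load("luna_voice.pth", map_location="cpu")

# RVC checkpoints are usually dicts of weights plus config metadata
for key, value in checkpoint.items():
    if torch.is_tensor(value):
        print(key, tuple(value.shape))
    else:
        print(key, type(value).__name__)</code></pre>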

<p>I will write a dedicated article on this shortly, so you can follow along as I run some interesting tests.</p>

<h2 class="wp-block-heading">Generating video with artificial intelligence</h2>

<p>One use of AI in film production may be text-to-video, or video-to-video. But how useful can generating videos with artificial intelligence be? I mean right from scratch, describing to the AI in text (or with a very simplified video reference) what you want to achieve. It is arguably the future, but to date tools like <a href="https://research.runwayml.com/gen1" target="_blank" rel="noreferrer noopener">
  <strong>Runway Gen-1</strong>
</a>, <a href="https://www.genmo.ai" target="_blank" rel="noreferrer noopener">
  <strong>Genmo AI</strong>
</a>, o <a href="https://moonvalley.ai">
  <strong>Moonvalley AI</strong>
</a> are little more than experiments. Useful at some junctures, but far from the quality and realism needed for a film production.</p>

<p>Evidently, we will have to keep working for a few more years to make our films <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>

<h2 class="wp-block-heading">Canon EOS R5C and Dual Fisheye lens for 3D fulldome video</h2>

<p>What about live action filming? In our journey, we are trying our hand at a very exciting combination of equipment. We always keep the goal in mind: <strong>little expense, much return</strong>. I had originally planned to shoot everything in full 360 degrees, also to make it already compatible in case of future virtual reality porting. But the cost became prohibitive, both for the camera (a professional camera like the <em>
  <a href="https://www.insta360.com/it/product/insta360-titan">Insta 360 Titan</a>
</em>, which by the way has not been updated for years, costs more than 17,000 euros), and for the difficulty of starting from scratch with such a wide field of view.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="277" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52-277x300.jpg" alt="" class="wp-image-897" srcset="https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52-277x300.jpg 277w, https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52-768x831.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/11/canondualfisheye52.jpg 906w" sizes="(max-width: 277px) 100vw, 277px" /></figure></div>
<p>So the idea is to produce the right video for the dome, in <strong>360&#215;180 degrees</strong> (basically half the sphere). The <em>
  <a href="https://www.canon.it/lenses/rf-5-2mm-f2-8l-dual-fisheye-lens/" target="_blank" rel="noreferrer noopener">Canon RF 5.2mm F2.8 L Dual Fisheye</a>
</em> lens, paired with the <em>
  <a href="https://www.canon.it/cameras/eos-r5c/" target="_blank" rel="noreferrer noopener">Canon EOS R5C</a>
</em>, should prove to be a winning choice. This setup not only provides immersive images, but also allows us to experiment with stereoscopic shots, which add an extra level of depth and realism.</p>
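<p>For anyone curious about the geometry, here is a simplified sketch of how an equirectangular frame can be resampled into the circular &#8220;domemaster&#8221; fisheye image a planetarium expects: each pixel inside the output circle corresponds to an azimuth and a zenith angle, which map back to a column and row of the source panorama. Real pipelines handle stitching, stereo and filtering far more carefully; this one uses nearest-neighbour sampling and a hypothetical input file.</p>

<pre class="wp-block-code"><code>import numpy as np
from PIL import Image

pano = np.asarray(Image.open("equirect_frame.jpg"))  # hypothetical source
src_h, src_w = pano.shape[:2]
SIZE = 1024                                          # output domemaster side

# Grid of output coordinates in [-1, 1] x [-1, 1]
v, u = np.mgrid[-1:1:SIZE * 1j, -1:1:SIZE * 1j]
r = np.sqrt(u**2 + v**2)               # radius from the dome center

theta = r * (np.pi / 2)                # zenith angle: 0 at center, 90 deg at rim
phi = np.arctan2(v, u)                 # azimuth around the dome

# Map (azimuth, zenith) back to panorama pixel coordinates
x = ((phi + np.pi) / (2 * np.pi) * (src_w - 1)).astype(int)
y = (theta / np.pi * (src_h - 1)).astype(int)

dome = pano[y.clip(0, src_h - 1), x.clip(0, src_w - 1)]
dome[r > 1] = 0                        # black outside the projection circle
Image.fromarray(dome).save("domemaster_frame.png")</code></pre>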

<p>The Canon EOS R5C camera, with its dynamic range of 11 to 12 stops and its 8K sensor, offers excellent value for money. This is an important consideration for us, as we are trying to maintain a balanced budget without sacrificing quality. We will also consider whether to rent it: from <a href="https://www.adcom.it" target="_blank" rel="noreferrer noopener">
  <em>Adcom</em>
</a>, lens and camera are offered at about 200 euros per day.</p>

<p>To be honest, we can&#8217;t fully exploit the potential of stereoscopy yet because of projection constraints, but it is definitely something we would like to explore in the future. It is one of those things that looks really cool and could add a special touch to our project. Have you ever projected stereoscopic fulldome content, or do you have suggestions on how we could integrate it into our work? I would be happy to hear your thoughts and ideas.</p>

<p>And if it falls short in any way compared to bigger-name cameras&#8230; again, we count on leveraging AI for video quality enhancement. But research in this field still needs to be done.</p>

<h2 class="wp-block-heading">Conclusions</h2>

<p>In the end, our choice of digital tools and equipment reflects a desire not to compromise on quality while keeping an eye on the budget.</p>

<p>In short, we are creating something beyond traditional cinema. Thanks to artificial intelligence and cutting-edge technology, the &#8220;cinema of the future&#8221; is no longer a dream. It is real, we want to see it happen. And with the genius of <strong>
  <em>
    <a href="https://www.linkedin.com/in/gbernasconi" target="_blank" rel="noreferrer noopener">Gérard Bernasconi</a>
  </em>
</strong> on the screenplay (who also made a great technical contribution), the precision of <strong>
  <a href="https://www.linkedin.com/in/michela-sette-613009198/" target="_blank" rel="noreferrer noopener">
    <em>Michela Sette</em>
  </a>
</strong> in the role of VFX Supervisor and the creativity of <strong>
  <em>
    <a href="https://www.linkedin.com/in/michele-pelosio-4a455b112/" target="_blank" rel="noreferrer noopener">Michele Pelosio</a>
  </em>
</strong> as director, we are forging a revolutionary cinematic experience. Get ready, because we are about to take you to a world where cinema and reality merge into a transcendental experience <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>The article <a href="https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/">AI in Film Production: How to Produce an Innovative Low-Budget Film</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/ai-in-film-production-how-to-produce-an-innovative-low-budget-film/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to make low-budget films with Artificial Intelligence &#8211; Early stages</title>
		<link>https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/</link>
					<comments>https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/#comments</comments>
		
		<dc:creator><![CDATA[Dario Riccio]]></dc:creator>
		<pubDate>Thu, 16 Feb 2023 17:13:23 +0000</pubDate>
				<category><![CDATA[Cinema]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[cinema]]></category>
		<guid isPermaLink="false">https://www.darioriccio.com/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/</guid>

					<description><![CDATA[<p>Discover how to use the latest artificial intelligence tools to write great low-budget screenplays. Get tips, suggestions, and strategies from experts to make your next film a success.</p>
<p>The article <a href="https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/">How to make low-budget films with Artificial Intelligence &#8211; Early stages</a> appeared first on <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I start this article (which I will divide into multiple parts) on the day of the Immaculate Conception: December 8th. I want to talk about the current state of artificial intelligence to assist with low-budget filmmaking. It will require some time, especially to try out various services, as I don&#8217;t want to make the usual sterile list. Also, because it will help us produce content for the films we will make in the near future.</p>

<p>I got the idea, albeit unintentionally, from <a href="https://www.linkedin.com/in/nicolasperrier/" target="_blank" rel="noreferrer noopener">Nicolas Perrier</a> from the University of Lyon in France, with one of his <a href="https://www.linkedin.com/feed/update/urn:li:activity:7006515451774001152/" target="_blank" rel="noreferrer noopener">posts on LinkedIn</a>. Perrier is a skilled expert in innovation in augmented and virtual reality, and the post in question is about <a href="https://plask.ai" target="_blank" rel="noreferrer noopener">Plask</a>; one of the many tools for creating 3D animations starting from a simple video. In practice, it&#8217;s Motion Capture without expensive and complex equipment. This technology may be of particular interest to us to produce animated videos at a fraction of the current cost, and even with actors remotely.</p>

<figure class="wp-block-video"><video controls="" src="https://www.darioriccio.com/wp-content/uploads/2022/12/Hero.mp4"></video><figcaption class="wp-element-caption">Plask example video</figcaption></figure>

<p>I&#8217;m also getting additional help, not just from Nicolas himself with his countless posts, but also from the website <a href="https://www.futurepedia.io" target="_blank" rel="noreferrer noopener">Futurepedia.io</a>. It&#8217;s an &#8220;AI wiki&#8221;, featuring a selection of many tools currently available to the general public.</p>

<p>Let&#8217;s analyze some of these tools, specifically the ones that are useful in filmmaking. Both for writing and for technical production of videos, as well as for voices. We&#8217;ll evaluate the quality of the results, conduct experiments, and learn about their costs.</p>

<p>To better understand how to use them in our low-budget films, I decided to create a short film (with very low expectations, just for technical experimentation) using them as much as possible.</p>

<h2 class="wp-block-heading">Making videos with artificial intelligence.</h2>

<p>Let&#8217;s divide the services into three main categories: <em>writing</em>, <em>video</em>, and <em>audio</em>. We start with writing, since we need the idea first.</p>

<h2 class="wp-block-heading">Film writing with artificial intelligence</h2>

<p>We need a story. Created by an AI? Let&#8217;s see, writing tools are not lacking. And if you don&#8217;t agree, you can always argue in the comments.</p>

<h2 class="wp-block-heading">How does GPT-3 work?</h2>

<p>Most public AI writing services are currently based on <strong>GPT-3</strong>, which has 175 billion machine learning parameters. There are actually many alternatives: <strong><a href="https://huggingface.co/bigscience/bloom" target="_blank" rel="noreferrer noopener">BigScience Bloom</a></strong>, a large-scale language model that has recently been launched (<em>with the advantage of being open source</em>), or the German <em>Aleph Alpha</em> with its <strong>Luminous</strong> (with 200 billion parameters).</p>

<h3 class="wp-block-heading">What are the parameters of an artificial intelligence?</h3>

<p>Imagine having a task that requires predicting whether an image contains a cat or not. A machine learning model could be trained on many images labeled as &#8220;cat&#8221; or &#8220;not cat&#8221; to learn to recognize the distinctive features of cats.</p>

<p>To do this, the model uses a neural network, which is a set of interconnected nodes. Each node represents a computation performed on the input data. <strong>The parameters, or &#8220;weights,&#8221; are the values assigned to each node that affect the strength of the connection between nodes</strong>.</p>

<p>When the model sees a new image, these weights are used to perform a series of calculations that ultimately produce a prediction, such as &#8220;cat&#8221; or &#8220;not cat&#8221;. The number of weights used in the model is the number of parameters.</p>

<p>In summary, parameters are like &#8220;adjustment values&#8221; that influence how a machine learning model processes data to make a prediction, and they are modified during training to improve the model&#8217;s performance.</p>

<p>A model with many parameters will have more opportunities to adjust its calculations to fit the training data, but it may also be more prone to <em>overfitting</em>, meaning an excessive adherence to the training data and a poor ability to generalize.</p>
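<p>To make the notion of &#8220;parameters&#8221; tangible, here is a toy PyTorch sketch of the cat/not-cat classifier described above, with its weights counted in exactly the way the headline figures (175 billion for GPT-3) are counted. The layer sizes are arbitrary, purely for illustration.</p>

<pre class="wp-block-code"><code>import torch.nn as nn

# A toy "cat / not cat" classifier for 64x64 RGB images
model = nn.Sequential(
    nn.Flatten(),              # 64 * 64 * 3 = 12288 input values
    nn.Linear(12288, 128),     # weights: 12288 * 128, biases: 128
    nn.ReLU(),
    nn.Linear(128, 2),         # weights: 128 * 2, biases: 2
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # 1,573,250</code></pre>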

<h3 class="wp-block-heading">Other competitors of ChatGPT.</h3>

<p>Other competitors are <strong>Nvidia</strong> with its <em><a href="https://developer.nvidia.com/megatron-turing-natural-language-generation" target="_blank" rel="noreferrer noopener">Megatron NGL</a></em> (huge, but not available to the public), <em>Google</em> with their <strong><a href="https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html" target="_blank" rel="noreferrer noopener">GLaM</a></strong>, <strong><a href="https://blog.google/technology/ai/lamda/" target="_blank" rel="noreferrer noopener">LaMDA</a></strong> and <strong><a href="https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html" target="_blank" rel="noreferrer noopener">PaLM</a></strong> (yes, they don&#8217;t leave anything out), <strong><a href="https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/" target="_blank" rel="noreferrer noopener">OPT</a></strong> by <em>Meta</em>, <strong><a href="https://github.com/amazon-science/alexa-teacher-models" target="_blank" rel="noreferrer noopener">AlexaTM</a></strong> by <em>Amazon</em>, or various Chinese players such as <em>Alibaba</em> with its <strong><a href="https://dl.acm.org/doi/abs/10.1145/3503221.3508417" target="_blank" rel="noreferrer noopener">BaGuaLu</a></strong>.</p>

<p>To conclude, I mention <strong><a href="https://6b.eleuther.ai" target="_blank" rel="noreferrer noopener">GPT-J</a></strong>, another open source alternative to GPT-3.</p>

<p>Let&#8217;s quickly understand how GPT-3 works, using a rephrased response from the AI itself:</p>

<p><em>GPT-3 (Generative Pretrained Transformer 3) is a state-of-the-art language model developed by OpenAI. It utilizes a deep learning technique called &#8220;transformers&#8221; to generate human-like text based on a given input.</em></p>

<p><em>Here&#8217;s how GPT-3 works:</em></p>

<ol class="wp-block-list">
<li><em>The model is trained on a large set of texts generated by humans, such as books, articles, and websites. This training process allows the model to learn patterns and structures of natural language.</em></li>



<li><em>When it receives an input, the model processes it using its layers of neural networks. Each layer performs a different task, such as analyzing the syntax, semantics, and context of the input.</em></li>



<li><em>The model then generates a response based on the input and its understanding of natural language. It does so by predicting the next words in the sentence, using the patterns and structures learned during training.</em></li>



<li><em>The model repeats this process several times, generating multiple responses to the input. These responses are then ranked and evaluated by the model, and the highest-scoring response is selected as the final output.</em></li>
</ol>

<p><em>GPT-3 is known for its ability to generate text that is often difficult to distinguish from text written by a human being. It is used in various applications, such as language translation, text summarization, and text generation.</em></p>
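<p>For those who prefer to try this from code rather than the chat interface, OpenAI also exposes GPT-3 through a simple completions API (as of early 2023; the model name and prompt below are just examples, and you need your own API key):</p>

<pre class="wp-block-code"><code>import openai

openai.api_key = "YOUR_API_KEY"  # personal key from the OpenAI dashboard

response = openai.Completion.create(
    model="text-davinci-003",    # a GPT-3 model available in early 2023
    prompt="Write a short story, with dialogue, for a CG short film "
           "about a lost city of gold.",
    max_tokens=800,
    temperature=0.8,             # higher values give more creative output
)
print(response.choices[0].text)</code></pre>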

<h3 class="wp-block-heading">ChatGPT can tell stories for screenplay.</h3>

<p>Let&#8217;s start directly with OpenAI&#8217;s product: <strong>ChatGPT</strong>. It is perhaps the tool of the moment; the first one to have made a leap towards the general public. It&#8217;s almost concerning how accurate and &#8220;real&#8221; it can be in conversations, but it has also been discussed too much and everywhere in the last few weeks.</p>

<p>I signed up on the OpenAI website and opened the chat to try talking to GPT-3 (I can&#8217;t imagine what GPT-4 will do next), asking it to write me a story with some dialogue for a short CG film. And well, it simply did it for me.</p>

<h4 class="wp-block-heading">The story written by ChatGPT</h4>

<p>This is the story as written by ChatGPT, with no modifications.</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>Title: The Lost City (click here to open)</span></summary><div class="lightweight-accordion-body">

<p>As the sun rose on the horizon, a small group of adventurers set out to find the lost city of gold. For months, they had been searching, following old maps and ancient legends in the hopes of striking it rich.</p>



<p>&#8220;Are you sure this is the right way?&#8221; asked <strong>Maria</strong>, the leader of the group. &#8220;I don&#8217;t see any sign of the city on the horizon&#8221;.</p>



<p>&#8220;Trust me, Maria,&#8221; replied <strong>Jake</strong>, the group&#8217;s ancient history expert. &#8220;The city is hidden in the depths of the jungle and it won&#8217;t be easy to find. But if we stick to the plan, we will find it&#8221;.</p>



<p>The group ventured into the dense jungle, hacking through the vines and undergrowth with their machetes. After several hours of grueling travel, they reached the edge of a clearing.</p>



<p>&#8220;Look!&#8221; <strong>Sarah</strong>, the youngest member of the group, shouted. &#8220;Down there! I see something shining in the sunlight!&#8221;.</p>



<p>The group rushed to the spot where Sarah had seen the glimmer, and sure enough, they found a small golden statue buried in the ground.</p>



<p>&#8220;This must be it!&#8221; exclaimed <strong>Maria</strong>. &#8220;We&#8217;ve found the lost city of gold!&#8221;.</p>



<p>The group started to dig, uncovering more and more treasures. They found gold coins, precious jewels, and ancient artifacts of great value.</p>



<p>&#8220;We&#8217;re rich!&#8221; exclaimed <strong>Jake</strong>, laughing with excitement. &#8220;We&#8217;ve found a treasure beyond our wildest dreams!&#8221;</p>



<p>But as they continued to dig, the ground beneath their feet began to rumble and shake. Suddenly, a huge stone door opened, revealing a hidden entrance to the lost city.</p>



<p>&#8220;Quick, everyone inside!&#8221; shouted <strong>Maria</strong>, leading the group into the dark tunnel. &#8220;We must find the source of the treasure before it is too late!&#8221;</p>



<p>The group went into the tunnel, lit by flashlights. They reached a large open chamber filled with treasures.</p>



<p>&#8220;Here we go!&#8221; shouted <strong>Sarah</strong>. &#8220;The Lost City of Gold!&#8221;</p>



<p>But as they explored the chamber, they heard a loud rumble. Suddenly, the ground beneath their feet gave way and they fell into a deep pit.</p>



<p>&#8220;Help!&#8221; cried <strong>Maria</strong>, as they fell into darkness. &#8220;We are trapped!&#8221;</p>



<p>But just when they thought all was lost, they heard a faint voice calling to them.</p>



<p>&#8220;Don&#8217;t worry, adventurers,&#8221; said <strong>the voice</strong>. &#8220;I am the guardian of the lost city and I will help you escape.&#8221;</p>



<p>With the help of the watchman, the group managed to get out of the pit and back to the surface. They came out of the jungle, triumphant and rich beyond expectation.</p>



<p>&#8220;We did it!&#8221; cried <strong>Jake</strong>, laughing with joy. &#8220;We have found the lost city of gold!&#8221; </p>

</div></details></div>
<p>Let us consider this first story. Of course, he didn&#8217;t make it easy for me&#8230; He could have set the story in the middle of an ordinary city; the jungle, by contrast, will be more complex to recreate. But I&#8217;m in; that&#8217;s the game.</p>

<h3 class="wp-block-heading">The same story, in a short version</h3>

<p>Oh, I also asked for a shorter version. He gave me a summary. Then I specified that I wanted a short version but with dialogue, and he simply did it. I report it here for completeness:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The Lost City (short version)</span></summary><div class="lightweight-accordion-body">

<p>As the group of adventurers searched for the lost city of gold, they came across a hidden entrance in the jungle.</p>



<p>&#8220;Look, over there!&#8221; said <strong>Sarah</strong>, pointing to the entrance.</p>



<p>The group rushed to the entrance and explored the city, finding treasure beyond their wildest dreams.</p>



<p>&#8220;We are rich!&#8221; shouted <strong>Jake</strong>.</p>



<p>But as they continued to search, the ground beneath them gave way and they fell into a pit.</p>



<p>&#8220;Help!&#8221; cried <strong>Maria</strong>.</p>



<p>A city watchman helped them escape and they emerged victorious and wealthy.</p>



<p>&#8220;We made it!&#8221; cried <strong>Jake</strong>.</p>

</div></details></div>
<h3 class="wp-block-heading">ChatGPT automatically writes the script</h3>

<p>I was considering assembling it into a screenplay, when I thought, &#8220;Why not ask him?&#8221; Said and done. He rewrote the story for me, already formatted for film:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>Screenplay for &#8220;The Lost City&#8221;</span></summary><div class="lightweight-accordion-body">

<p>INT. JUNGLE &#8211; DAY</p>



<p>The sun rises on the horizon as a group of adventurers, MARIA, JAKE and SARAH, set out to find the lost city of gold.</p>



<p>MARIA<br/>Are you sure this is the right way? I don&#8217;t see any sign of the city on the horizon.</p>



<p>JAKE<br/>Trust me, Maria. The city is hidden deep in the jungle and it won&#8217;t be easy to find. But if we stick to the plan, we will find it.</p>



<p>The group goes into the dense jungle, cutting through the lianas and undergrowth with their machetes.</p>



<p>SARAH<br/>Look! There! I see something shimmering in the sunlight!</p>



<p>The group rushes to the spot where Sarah saw the glitter and finds a small golden statue buried in the earth.</p>



<p>MARIA<br/>This must be it! We have found the lost city of gold!</p>



<p>The group starts digging, discovering more and more treasures.</p>



<p>JAKE<br/>We are rich! We have found a treasure beyond our wildest dreams!</p>



<p>But as they continue to dig, the ground beneath their feet begins to rumble and shake. Suddenly, a huge stone door opens, revealing a hidden entrance to the lost city.</p>



<p>MARIA<br/>Quick, everyone inside! We must find the source of the treasure before it is too late!</p>



<p>The group enters the tunnel, lit by flashlights. They reach a large open chamber filled with treasures.</p>



<p>SARAH<br/>That&#8217;s it! The lost city of gold!</p>



<p>But as they explore the chamber, they hear a loud rumble. Suddenly, the ground beneath their feet gives way and they fall into a deep pit.</p>



<p>MARIA<br/>Help! We are in a trap!</p>



<p>But just when they think all is lost, they hear a faint voice calling to them.</p>



<p>GUARD (off-camera)<br/>Don&#8217;t worry, adventurers. I am the guardian of the lost city and I will help you escape.</p>



<p>With the help of the watchman, the group manages to get out of the pit and back to the surface. They re-emerge from the jungle, triumphant and rich beyond expectation.</p>



<p>JAKE<br/>We made it! We have found the lost city of gold!</p>

</div></details></div>
<p>Apart from the <em>INT. JUNGLE</em> heading (a jungle is hardly an interior), it is otherwise a good starting point. But some formal errors jumped out at <em>Andrea Rocci</em>, a &#8220;human&#8221; screenwriter, albeit without an in-depth analysis (we were chatting over a beer).</p>

<p>In fact, a script speaks in images, and phrases such as &#8220;discovering more and more treasures&#8221; or &#8220;rich beyond expectation&#8221; are not at all clear. Which treasures? Statues, coins, anything else? And what do you mean by rich? Are they draped in gold? Are they dressed flamboyantly? Everyone with the latest iPhone and the keys to a Ferrari?</p>

<p>Not to mention the lack of descriptions of environments. The jungle itself is left to the fullest imagination of the director, or set designers (or 3D artists, whatever).</p>

<p>However, we must make a virtue of necessity; we will leave any such choice to the discretion of the director (if one existed, that is&#8230;). We will try to find a good one on <a href="https://www.character.ai" target="_blank" rel="noreferrer noopener"><strong>character.ai</strong></a>, perhaps. Do try it out and talk to artificial &#8220;characters.&#8221; Even <em>Albert Einstein</em> is there!</p>

<h4 class="wp-block-heading">Prices</h4>

<p>Here it&#8217;s simple: basically, <strong>it doesn&#8217;t cost anything</strong>. A $20/month version is starting to be marketed in some countries, which removes some limitations (mainly due to the computing power needed to handle the millions of requests coming into OpenAI each day).</p>

<h2 class="wp-block-heading">Alternatives to ChatGPT</h2>

<p>At present GPT-3 is hard to beat&#8230; While waiting for the biggest-name candidates to come out (<em><a href="https://blog.google/technology/ai/bard-google-ai-search-updates/" target="_blank" rel="noreferrer noopener">Google Bard</a></em> soon), I asked ChatGPT itself about its competitors. He pointed me to <em>ScriptBuddy</em>, <em>WriterDuet</em> and <em>AI Screenwriter</em> to begin with. Asking for more, I got Plotbot, Amazon Storywriter, and InkTip Script Listing. Okay, I thought that was enough&#8230; Except that the answer is actually a partial lie. Here we see the current limitations of this artificial intelligence, in part (but not only) due to the fact that its training data only goes up to 2021.</p>

<p>Meanwhile, <em>ScriptBuddy</em>, originally based on GPT-2, has not been maintained for more than 3 years; we exclude it. <em>WriterDuet</em>, on the other hand, is mostly useful for arranging a script; it doesn&#8217;t write anything itself&#8230; As for <em>AI Screenwriter</em>, I don&#8217;t think it even exists; at least, dear old Google gives me no results.</p>

<p>As for Plotbot, or rather <em><a href="https://github.com/rasbot/plot-bot" target="_blank" rel="noreferrer noopener">Plot-bot</a></em>, it actually generates movie plots using the GPT-2 engine. But, like ScriptBuddy, it has not been maintained for years, so to save time I rule it out. The same is true for <em><a href="https://en.wikipedia.org/wiki/Amazon_Storywriter" target="_blank" rel="noreferrer noopener">Amazon StoryWriter</a></em>, while InkTip Script Listing is not for generating text at all.</p>

<h3 class="wp-block-heading">Dramatron</h3>

<p><a href="https://deepmind.github.io/dramatron/" target="_blank" rel="noreferrer noopener"><strong>Dramatron</strong></a> is a new alternative, branded <em>Deepmind</em>, that promises exactly what we are looking for: writing screenplays for film and theater. Unfortunately, it was only released a couple of days around December 9, and by the time I got to try it, it was too late.</p>

<p>It remains an interesting project to keep in mind for when it reopens. For the time being, <em><a href="https://www.youtube.com/@georgemike-com" target="_blank" rel="noreferrer noopener">George Mike</a></em>, author of this YouTube video, does not like it.</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="DeepMind&#039;s Dramatron is very not good. [Demo]" width="678" height="509" src="https://www.youtube.com/embed/A1qGOfbGQcA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption">Dramatron test</figcaption></figure>

<p>His comment? &#8220;<em>&#8230;overall, it seemed insipid to me. And it didn&#8217;t give me much more than the dialogue that you find in all the movies. It clearly grasped the genre, but did not explore it in an interesting way. It did not advance the narrative</em>&#8221;.</p>

<h3 class="wp-block-heading"><strong>Rytr</strong></h3>

<p>Through <em>Google</em> and <em>Futurepedia</em> I then came to <a href="https://rytr.me" target="_blank" rel="noreferrer noopener"><em>Rytr</em></a>. A nice tool. Account and login, and I find myself in the work area. It basically asks for the language, style and use case of the text to be created, and off it goes. Since there was clearly no &#8220;film script&#8221; use case, I tried in the meantime with a tool called the &#8220;magic wand.&#8221;</p>

<p>I made the same request as with ChatGPT; the response, however, was almost hilarious:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>Story for a short film</span></summary><div class="lightweight-accordion-body">

<p>In the name of Allah, the Compassionate, the Merciful</p>



<p>I believe</p>



<p>I believe in Allah, the One God</p>



<p>I believe in prophets and messengers</p>



<p>I believe in angels and scripture</p>



<p>I believe in the books of revelation</p>



<p>I believe in life after death and that on judgment day justice will be done.</p>



<ul class="wp-block-list">
<li>Sorry, what? You are saying &#8220;I believe&#8221; everything.</li>
</ul>



<p>I don&#8217;t have time for that.</p>



<ul class="wp-block-list">
<li>I&#8217;m sorry, I don&#8217;t follow you.</li>



<li>Why are you apologizing?</li>



<li>What?</li>
</ul>

</div></details></div>
<p>Let&#8217;s try something different and more specific&#8230; I asked: &#8220;I need a script of about 2000 words, with dialogue, to make a CG short film. The title is: the lost city&#8221;, and set &#8220;story plot&#8221; as the use case. The prompt was based on the previous ChatGPT output. But we are still a long way off, although we do get a basic storyline:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The lost city</span></summary><div class="lightweight-accordion-body">

<p>In the near future, a crew of archaeologists and scientists on an exploratory mission to an unknown planet are captured and imprisoned by an ancient alien civilization.</p>



<p>The team is somewhat lost because of the language barrier and interactions with the aliens. They are kept in captivity for decades, exploring the ruins and discovering all the secrets of civilization.</p>



<p>They manage to return home, at the end. Scientists have learned many new things and are bringing them back to Earth to share with humans. However, they took so long that by now not only has the Earth changed, but there is no more time for these discoveries.</p>

</div></details></div>
<p>Same request, but with the &#8220;magic wand&#8221; as the use case:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The lost city</span></summary><div class="lightweight-accordion-body">

<p>INT. LIVING ROOM &#8211; AFTERNOON</p>



<p>The room is well furnished and looks very comfortable. There is a door that leads to another room.</p>



<p>JOE: Good morning, John.</p>



<p>Morning, Joe. How are you this morning?</p>



<p>JOE: I&#8217;m fine, thank you for asking. Do you?</p>



<p>JOHN: I&#8217;m just fine! What&#8217;s on the agenda for today?</p>



<p>JOE: I&#8217;m going to catch up on my reading and then probably take a walk around the neighborhood. Do you have any programs</p>

</div></details></div>
<p>Okay, a useful tool for short texts, but still far from full autonomy. On to prices, although in our case I am sticking with ChatGPT&#8217;s text.</p>

<h4 class="wp-block-heading">Prices</h4>

<p>Rytr has a completely <strong>free</strong> version, limited to 10,000 characters per month. Otherwise it costs <strong>$9 per month</strong> for 100,000 characters, or <strong>$29 per month</strong> for unlimited use.</p>

<h3 class="wp-block-heading">Shortly AI</h3>

<p>Another tool is <strong><a href="https://www.shortlyai.com" target="_blank" rel="noreferrer noopener">Shortly AI</a></strong>, primarily designed to combat &#8220;writer&#8217;s block&#8221;. Write your text, or even a single sentence, and it continues for you.</p>

<p>Since I could not ask it to write the story for me, I used the first sentence of the story generated by ChatGPT: <em>Quando il sole sorse all&#8217;orizzonte, un piccolo gruppo di avventurieri si mise in viaggio per trovare la città d&#8217;oro perduta</em> (&#8220;When the sun rose on the horizon, a small group of adventurers set out to find the lost city of gold&#8221;). Although based on the same language model, the continuation was clearly different. </p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="682" src="https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-1024x682.jpg" alt="" class="wp-image-789" srcset="https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-1024x682.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-300x200.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-768x512.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-1536x1023.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/12/Screenshot-2022-12-10-alle-10.00.55-2048x1364.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>This is the final text:</p>
<div class="lightweight-accordion"><details><summary class="lightweight-accordion-title"><span>The lost city</span></summary><div class="lightweight-accordion-body">

<p>As the sun rose on the horizon, a small group of adventurers set out to find the lost city of gold. It was said that there were more than a thousand tons of gold in the lost city, but like all legends, the story of the treasure is told only to those who are worthy.</p>



<p>They had been on the road for two weeks and hoped to reach their destination by the end of the week. So far the road had been smooth, but when they approached a hill, there was a small construction on the road, which made the wagons slow down. The caravan had two large wagons carrying gold, while the rest were on horseback. The rest of the group was not much concerned about the delay, apart from the leader. To contain large amounts of gold, silver, and jewelry dating back to the ancient Aztecs. A lost city that was protected by untold numbers of demons and ghosts. It was said that not one man or beast who entered the cursed city returned.</p>



<p>The small convoy headed to the place indicated by the employer. They went over the details of the contract one last time. The six adventurers had divided into three groups of two. Each group had positioned itself in a location. The city was hidden in the depths of a mountain range at the center of which was a well-protected fortress left by the Aztec empire. The well was said to be a gateway to the lost city. Inside the fortress was to be a statue of an angel. It was a special stone tablet with an incantation that would concentrate the power of the jewel on the eighth son of an Aztec king.</p>



<p>Upon hearing of the strange fortress, the Aztec leader marched his army to attack it, but mysteriously one of the generals and his army were taken away to another kingdom and disappeared. The king then ordered the retreat. This was the beginning of problems. As they were leaving, they were attacked by large monsters that were thought to have been killed, but then were seen on top of the fortress. Battles continued throughout the following week, with the Aztecs defeated.</p>



<p>At the end, the king ordered the remaining troops to return to the city, saying that the previous week&#8217;s battles had awakened some ancient demons responsible for the disappearance of not only his son but also some of his generals and the remaining troops. The king had outlawed all kinds of magic and spells in his city, and everyone was to be executed. One young man was helped by his mother, also a sorceress, once she was discovered.</p>

</div></details></div>
<p>Ultimately, a nice-sounding story, but with little coherence. The AI took bits of text from here and there without connecting them in an &#8220;intelligent&#8221; way.</p>

<h4 class="wp-block-heading">Prices</h4>

<p>Shortly AI is certainly useful for adding a sentence or two and overcoming writer&#8217;s block, but left to write on its own, it does not seem the best tool.</p>

<p>Nor is its cost negligible: <strong>$79 per month</strong>, or <strong>$780 per year</strong>. I used the free trial, which only requires registering an account.</p>

<h3 class="wp-block-heading">WriterX</h3>

<p>Let&#8217;s look at <a href="https://writerx.co" target="_blank" rel="noreferrer noopener"><strong>WriterX</strong></a>. It has templates for creating articles, social posts, text conclusions, and many more. Since it is not designed for whole stories, I expect results similar to Shortly AI&#8217;s.</p>

<p>Unfortunately, the free trial version does not give access to all features, and I didn&#8217;t feel like paying for a service (a fairly expensive one, at that) without trying it out first, especially since I already have a free alternative I am partially satisfied with.</p>

<p>That said, I am evaluating it for its useful marketing features&#8230; like the bios of my social channels, or future help in writing blog articles. I even used it to write the meta description of this very article.</p>

<h4 class="wp-block-heading">Prices</h4>

<p>WriterX costs <strong>$29 a month</strong> in the standard version (basically the trial I had), or <strong>$59 a month</strong> for unlimited text and features. It is available in 25 languages.</p>

<h3 class="wp-block-heading">Jasper AI</h3>

<p>I also wanted to try <strong><a href="https://www.jasper.ai" target="_blank" rel="noreferrer noopener">Jasper AI</a></strong>, which is ultimately a graphical user interface for GPT-3 itself. It is perhaps the most heavily advertised, found everywhere. But I never even began to use it: it forces you to enter your credit card even for the free version, and to verify the card it charges not a few cents but the entire <strong>first month: $29</strong>. An unprofessional approach, so I decided not to deal with them and asked for my money back immediately.</p>
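<p>Incidentally, anyone comfortable with a little code can skip these paid front ends altogether and query GPT-3 directly through OpenAI&#8217;s API. A minimal sketch in Python, assuming the official <code>openai</code> package and an API key; <code>text-davinci-003</code> is the GPT-3 model current as I write:</p>

<pre class="wp-block-code"><code># Minimal sketch: querying GPT-3 directly, with no GUI front end.
# Assumes: pip install openai, and an OpenAI API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3 model current at the time of writing
    prompt=(
        "I need a script of about 2000 words, with dialogue, "
        "to make a CG short film. The title is: the lost city."
    ),
    max_tokens=1500,   # roughly the length of a short treatment
    temperature=0.8,   # some creativity, without pure randomness
)

print(response.choices[0].text)
</code></pre>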

<h3 class="wp-block-heading">GPT-J and Writey AI</h3>

<p>To get out of the GPT-3 universe, I wanted to try the open-source <strong>GPT-J</strong> via the <a href="https://6b.eleuther.ai" target="_blank" rel="noreferrer noopener">6b.eleuther.ai</a> website; however, it always crashed with the message &#8220;Unable to connect to the model. Please try again&#8221;. Being open source, though, it can also be run locally, as sketched below. I also tried <strong><a href="https://writey.ai" target="_blank" rel="noreferrer noopener">Writey AI</a></strong>, which works well but is too specialized in writing blog articles. I still recommend checking it out, if only for the 5 completely free articles per month.</p>
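<p>Here is that local-run sketch: GPT-J&#8217;s weights are public on Hugging Face, so the website is not the only way in. This assumes the <code>transformers</code> library and a machine with plenty of memory (the 6-billion-parameter checkpoint is not small):</p>

<pre class="wp-block-code"><code># Sketch: running the open-source GPT-J 6B locally via Hugging Face transformers.
# Assumes: pip install transformers torch, plus plenty of RAM (or a large GPU).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "As the sun rose on the horizon, a small group of adventurers"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation, in the same "keep writing" style as Shortly AI.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</code></pre>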

<p>Ultimately, I am tired, and any further research seems futile. After all, ChatGPT&#8217;s text holds up (as far as an &#8220;automatic&#8221; text can), so I would say let&#8217;s move on to the technical realization of the short film.</p>

<h2 class="wp-block-heading">Creating 3D characters</h2>

<h3 class="wp-block-heading">Can an artificial intelligence generate 3D characters?</h3>

<p>So we need the characters for our story, but is it really possible to generate them with A.I.? <strong>Spoiler: as of today, not well</strong>. There are many promises and some solutions that come close, but we are not there yet. Let&#8217;s review them briefly, as they will be useful in the near future. Then we will figure out how to get the characters of our story today, with little money.</p>

<h3 class="wp-block-heading">PIFuHD</h3>

<p>To start, there is <a href="https://colab.research.google.com/drive/11z58bl3meSzo6kFqkahMa35G5jmh2Wgt#scrollTo=1TfPAtL4CyZw" target="_blank" rel="noreferrer noopener">PIFuHD</a>, which is already available to the public and promises to create a 3D character from a single photo. It works, but even the presentation videos show a quality far from acceptable for cinema.</p>

<h3 class="wp-block-heading">Google DreamFusion</h3>

<p><a href="https://dreamfusion3d.github.io" target="_blank" rel="noreferrer noopener"><strong>DreamFusion</strong></a> is one of the first A.I.s that can leverage 2D data from Stable Diffusion (the same data used to create photos using artificial intelligence, which is now widespread), to create 3D models.</p>

<p>Same problem as before: unsatisfactory quality even at first glance. </p>
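<p>For the technically curious, the core trick, called Score Distillation Sampling in the paper, is roughly this: render the 3D model from a random viewpoint, add noise to the render, ask the frozen 2D diffusion model which noise it would remove to match the text prompt, and push the 3D parameters in that direction. A heavily simplified PyTorch sketch, where <code>render</code>, <code>diffusion_eps</code>, and <code>nerf_params</code> are stand-ins for the real networks:</p>

<pre class="wp-block-code"><code># Heavily simplified sketch of DreamFusion-style Score Distillation Sampling.
# "render" and "diffusion_eps" are stand-ins for a real NeRF renderer and a
# frozen, text-conditioned 2D diffusion model.
import torch

nerf_params = torch.randn(1000, requires_grad=True)  # stand-in for NeRF weights
optimizer = torch.optim.Adam([nerf_params], lr=1e-2)

def render(params, view):
    # Stand-in: a real implementation renders the NeRF from a random camera.
    return params[:64].reshape(1, 1, 8, 8) * view

def diffusion_eps(noisy_image, t):
    # Stand-in: a real implementation predicts the noise with the frozen
    # diffusion model, conditioned on the text prompt.
    return noisy_image * 0.1 * t

for step in range(100):
    view = torch.rand(1)               # random camera viewpoint
    image = render(nerf_params, view)  # differentiable render
    t = torch.rand(1)                  # random diffusion timestep
    noise = torch.randn_like(image)
    noisy = image + t * noise          # noised render

    # SDS gradient: (predicted noise - true noise); no backprop through
    # the diffusion model itself.
    with torch.no_grad():
        grad = diffusion_eps(noisy, t) - noise

    loss = (grad * image).sum()        # surrogate loss carrying that gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</code></pre>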

<h3 class="wp-block-heading">Microsoft Rodin Diffusion</h3>

<p>A 2023 newcomer from Microsoft, <a href="https://3d-avatar-diffusion.microsoft.com" target="_blank" rel="noreferrer noopener">Rodin Diffusion</a> is not yet available to the public. It promises to create a realistic 3D avatar from a single photo. Well, judging from the photos released on their site, the hair&#8230; That is always the problem!</p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="275" src="https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-1024x275.jpg" alt="" class="wp-image-806" srcset="https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-1024x275.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-300x80.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-768x206.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-1536x412.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/01/3d-avatar-diffusion-1-2048x549.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Microsoft 3D Avatar Diffusion generative model</figcaption></figure>

<h3 class="wp-block-heading">Nvidia Get3D</h3>

<p>You certainly cannot leave out the queen of graphics cards, Nvidia, among the 3D tools. And indeed its <a href="https://nv-tlabs.github.io/GET3D/" target="_blank" rel="noreferrer noopener"><strong>Get3D</strong></a> is superlative at creating 3D models from what it learns from a dataset of existing 3D models.</p>

<p>Such a dataset is much more limited than an image dataset, which is a disadvantage for the variety of 3D objects that can be produced. It is, however, open source, a definite plus, although the entire training must be done on your own system. That means plenty of high-end video cards&#8230; and they cost a little. Make that a lot.</p>

<h3 class="wp-block-heading">Nvidia Magic3D</h3>

<p>How did Nvidia solve Get3D&#8217;s &#8220;limitation&#8221; on the quantity of objects that can be created? As the article in <a href="https://the-decoder.com/nvidias-magic3d-turns-text-into-high-resolution-3d-objects/" target="_blank" rel="noreferrer noopener">The Decoder</a> explains, simply by copying Google&#8230; and trying to make its new <strong><a href="https://deepimagination.cc/Magic3D/" target="_blank" rel="noreferrer noopener">Magic3D</a></strong> (<a href="https://arxiv.org/pdf/2211.10440.pdf" target="_blank" rel="noreferrer noopener">researchers&#8217; paper here</a>) faster and sharper than DreamFusion. This video explains well how it works:</p>

<figure class="wp-block-video"><video controls="" src="https://www.darioriccio.com/wp-content/uploads/2022/12/nvidia_magic3d.mp4"></video><figcaption class="wp-element-caption">Process of 3D object creation with Nvidia Magic3D from text</figcaption></figure>

<p>In practice, where Get3D relies on a dataset of other 3D models, Magic3D starts from images, themselves generated by A.I., paving the way for virtually infinite combinations.</p>

<p>Does it work well? What is certain is that it is not available to the public, but from the videos and examples on the Web it looks like a very promising technology, although it still does not reach the necessary quality.</p>

<h3 class="wp-block-heading">StyleGAN-NADA</h3>

<p>For making Pokemon or other fun things there is <strong><a href="https://stylegan-nada.github.io" target="_blank" rel="noreferrer noopener">StyleGAN-NADA</a></strong>, trained using OpenAI&#8217;s CLIP (Contrastive Language-Image Pre-Training) model. It allows you to create images from just a textual description, without the need to see any reference images and without collecting additional training data.</p>

<p>It is also possible to modify existing images to make them similar to those in other domains, for example, using an image of a dog to generate a cat. The same approach can be applied to other generative architectures, opening up interesting possibilities for creating images quickly and accurately.</p>
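<p>The heart of the method is CLIP&#8217;s &#8220;directional&#8221; loss: the shift from source to target domain in text space (say, from &#8220;dog&#8221; to &#8220;cat&#8221;) should match the shift between the frozen generator&#8217;s output and the fine-tuned generator&#8217;s output in image space. A sketch using OpenAI&#8217;s <code>clip</code> package; the two random 224&#215;224 tensors stand in for actual generator renders:</p>

<pre class="wp-block-code"><code># Sketch of the CLIP directional loss behind StyleGAN-NADA.
# Assumes: pip install torch, plus OpenAI's CLIP package from GitHub.
# The random tensors below stand in for outputs of the frozen and the
# fine-tuned copies of the generator.
import torch
import clip

model, preprocess = clip.load("ViT-B/32", device="cpu")  # preprocess: real images

def embed_text(text):
    tokens = clip.tokenize([text])
    with torch.no_grad():
        e = model.encode_text(tokens)
    return e / e.norm(dim=-1, keepdim=True)

# Direction in text space: source domain -> target domain.
text_dir = embed_text("photo of a cat") - embed_text("photo of a dog")
text_dir = text_dir / text_dir.norm(dim=-1, keepdim=True)

def directional_loss(frozen_image, trained_image):
    # Direction in image space: frozen output -> fine-tuned output.
    img_dir = model.encode_image(trained_image) - model.encode_image(frozen_image)
    img_dir = img_dir / img_dir.norm(dim=-1, keepdim=True)
    # 1 - cosine similarity between the two directions.
    return 1 - torch.nn.functional.cosine_similarity(img_dir, text_dir).mean()

frozen = torch.randn(1, 3, 224, 224)                       # "dog" render
trained = torch.randn(1, 3, 224, 224, requires_grad=True)  # "cat" render
print(directional_loss(frozen, trained))
</code></pre>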

<h3 class="wp-block-heading">Pollinations</h3>

<p>Among the &#8220;next steps,&#8221; <strong><a href="https://pollinations.ai" target="_blank" rel="noreferrer noopener">Pollinations</a></strong> promises to do what we need it to do. From their website, &#8220;at the research level, our team is developing technology that allows people to generate 3D objects and avatars with the help of text alone&#8221;.</p>

<p>Here again, we will have to wait. For now, it allows you to do interesting things in the photo/video area. Try it if you like, but there is little of use for the purposes of this article.</p>

<h3 class="wp-block-heading">Text2mesh</h3>

<p>Small but interesting, <strong><a href="https://threedle.github.io/text2mesh/" target="_blank" rel="noreferrer noopener">Text2mesh</a></strong> is less of an exercise in style than its predecessors. Here you must already have the model, but the AI promises to modify it on its own: for example, increasing the number of polygons, changing its shape and color, even creating the texture from scratch. All based on a text prompt, a simple written request.</p>
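<p>Conceptually, this is one more CLIP-guided optimization, this time over the mesh itself: render the mesh, score the render against the text prompt with CLIP, and nudge per-vertex displacements and colors accordingly. In heavily stubbed form, where <code>differentiable_render</code> and <code>clip_score</code> stand in for the real renderer and for CLIP:</p>

<pre class="wp-block-code"><code># Stub sketch of the Text2mesh idea: optimize per-vertex offsets and colors
# of an existing mesh so CLIP judges its renders to match a text prompt.
# "differentiable_render" and "clip_score" are stand-ins for the real
# differentiable renderer and the CLIP similarity score.
import torch

num_vertices = 5000
offsets = torch.zeros(num_vertices, 3, requires_grad=True)  # displacement field
colors = torch.rand(num_vertices, 3, requires_grad=True)    # vertex colors
optimizer = torch.optim.Adam([offsets, colors], lr=5e-3)

def differentiable_render(offsets, colors):
    # Stand-in: a real implementation rasterizes the displaced, colored mesh
    # from a random viewpoint and returns an image tensor.
    return offsets.mean() + colors.mean()

def clip_score(render):
    # Stand-in: a real implementation embeds the render and the prompt with
    # CLIP and returns their similarity (to be maximized).
    return -render

for step in range(200):
    render = differentiable_render(offsets, colors)
    loss = -clip_score(render)  # maximize similarity = minimize its negative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</code></pre>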

<p>One to note down in case we need it.</p>
<div class="wp-block-image">
<figure class="alignright size-medium"><img loading="lazy" decoding="async" width="111" height="300" src="https://www.darioriccio.com/wp-content/uploads/2023/02/lumaai-111x300.jpg" alt="" class="wp-image-828" srcset="https://www.darioriccio.com/wp-content/uploads/2023/02/lumaai-111x300.jpg 111w, https://www.darioriccio.com/wp-content/uploads/2023/02/lumaai.jpg 376w" sizes="(max-width: 111px) 100vw, 111px" /><figcaption class="wp-element-caption">Luma AI</figcaption></figure></div>
<h3 class="wp-block-heading">Luma AI</h3>

<p><a href="https://lumalabs.ai" target="_blank" rel="noreferrer noopener"><strong>Luma AI</strong></a> is an interesting project to scan real objects by recreating them in 3D. The operation is interesting, and the quality of the scans is reasonably good. For props or figures in the background, I consider it more than acceptable, even in production.</p>

<p>It has also recently added the ability to create objects, and thus characters, in 3D from a text prompt, using the classic &#8220;imagine&#8221; command already made famous by the <em><a href="https://midjourney.com/home/?callbackUrl=%2Fapp%2F" target="_blank" rel="noreferrer noopener">MidJourney</a></em> image generator. But here, again, the quality is not sublime: good for experiments, but definitely not usable for production purposes. </p>

<h2 class="wp-block-heading">3D characters and objects without artificial intelligence</h2>

<p>From all of this we learned just one thing: as of today, January 2023, A.I. still does not allow us to obtain good 3D models. Since we have to keep the budget low, however, let&#8217;s create them with the tools already available.</p>

<h3 class="wp-block-heading">3D characters in our short film</h3>

<p>I won&#8217;t go into a lot of research here; instead, I will evaluate two 3D character creation tools that I already know: the simple <a href="https://www.reallusion.com/character-creator/" target="_blank" rel="noreferrer noopener"><em>Reallusion</em> <strong>Character Creator</strong></a>, and <a href="https://www.unrealengine.com/en-US/metahuman" target="_blank" rel="noreferrer noopener"><em>Epic&#8217;s</em> fantastic <strong>MetaHuman</strong></a>.</p>

<p>We will need 3 characters for our story: <em>Maria</em>, <em>Jake</em>, and <em>Sarah</em>.</p>

<h3 class="wp-block-heading">MetaHuman Creator</h3>

<p>Actually, in the case of MetaHuman there is a change from my past experience: this time I used the beta of <strong>MetaHuman Creator</strong>. It is phenomenal in that it let me take advantage of the computing power of Epic&#8217;s servers, creating characters during a work break directly on my laptop (which only needs to receive a video stream).</p>

<p>So, after requesting &#8220;Early Access&#8221; with an Epic Games account at <a href="https://metahuman.unrealengine.com">metahuman.unrealengine.com</a>, we are presented with a choice of possible starting characters. </p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="570" src="https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-1024x570.jpg" alt="" class="wp-image-800" srcset="https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-1024x570.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-300x167.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-768x427.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-1536x855.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2022/12/metahuman-2048x1139.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>

<p>We will imagine that all three are between 20 and 40 years old; beyond that, the script does not tell us much about them.</p>

<h4 class="wp-block-heading">Jake</h4>

<p>I begin by selecting <em>Aoi</em> as <strong>Jake</strong>. I don&#8217;t know, that beard just says &#8220;adventurer&#8221; to me.</p>

<p>The software warns that some elements of the character (specifically the hair) are still under development, and that therefore only LOD (level of detail) 0 (automatic) and 1 (highest quality) will be displayed. That is fine for us: the destination is a pre-rendered video, clearly at the highest quality, not a real-time video game.</p>

<p>A few changes to the character (shirtless, eye color, more &#8220;suitable&#8221; shoes and pants), and he is saved. Later we will export him with Unreal Engine 5&#8217;s Quixel Bridge plugin.</p>

<h4 class="wp-block-heading">Maria</h4>

<p>Let&#8217;s move on to <strong>Maria</strong>. I asked ChatGPT to come up with her characteristics, and the answer was that she could be a woman around 30 years old. Brown hair, shoulder-length and pulled back into a ponytail. Brown eyes, intense and deep, and of Latin ethnicity, with skin tanned from her outdoor adventures.</p>

<p>Let&#8217;s try to build her. We start from <em>Roux</em> and blend her with Lena, Kendra, and Tori, who seem suitable for adjusting her features, and thus her ethnicity, a bit. Then we give her a ponytail and brown eyes, modify her skin texture to add a few years, remove the make-up she would hardly wear in the middle of the jungle, change her clothing, and that&#8217;s it.</p>

<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-1024x576.jpg" alt="" class="wp-image-808" srcset="https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-1024x576.jpg 1024w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-300x169.jpg 300w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-768x432.jpg 768w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-1536x864.jpg 1536w, https://www.darioriccio.com/wp-content/uploads/2023/01/Maria-2048x1152.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Maria</figcaption></figure>

<h4 class="wp-block-heading">Sarah</h4>

<p>Finally, <strong>Sarah</strong>. According to ChatGPT, she is about 25 years old, with short, wavy blond hair; lively, inquisitive blue eyes; around 1.70 meters tall; slender and muscular, suggesting an active, sporty person; and of Northern European descent, with fair, delicate skin.</p>

<p>We rely on <em>Vivian</em>, with various modifications until she is something like what is required. Clearly MetaHuman has many limitations, even more so in this online version. For example, we have practically no control over the body, so &#8220;muscular&#8221; is a feature we will have to give up, unless we model it later. But first, I am not a 3D modeler; and second, this is a zero-budget project for educational purposes only, so there is no point in spending more time on it than necessary.</p>

<p>Finally, let&#8217;s leave them there; we&#8217;ll create the animations with mannequins and then retarget them onto the MetaHuman characters directly in Unreal Engine 5.</p>

<h2 class="wp-block-heading">Conclusions</h2>

<p>Let&#8217;s stop here for today; in one of the next articles I will talk specifically about animation and Motion Capture with artificial intelligence (where it will be most useful to us), and then we will continue with environments, voices, music, and whatever else we need to finalize our little project.</p>

<p>I will give myself time to finish slowly, and possibly write other articles before continuing this one, for two reasons: these are elaborate operations, and artificial intelligence is in an explosive phase. An article written today may be old tomorrow. Maybe by tonight.</p>

<p>Therefore, since we will need this information much more toward the end of this year, for the actual production of a fulldome story, let us keep an eye on all the news in the coming months.</p>

<p>As always, thank you for following me, and a hug.</p>
<p>L'articolo <a href="https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/">How to make low-budget films with Artificial Intelligence &#8211; Early stages</a> è apparso primo per <a href="https://www.darioriccio.com/en/">Future of Cinema - by Dario Riccio</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.darioriccio.com/en/how-to-make-low-budget-films-with-artificial-intelligence-early-stages/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
