Cinnafilm’s enterprise transcoding platforms, PixelStrings Cloud and PixelStrings On-Prem, support all image processing functions of the Spacetime product family (Dark Energy, Tachyon, Wormhole).
Sold directly by Cinnafilm or through our resellers, Cinnafilm software is licensed by usage (SaaS) or by quarterly/annual subscription. Cinnafilm does not sell permanent licenses of our software.
All software sold directly by Cinnafilm or through our resellers is licensed on a subscription or usage basis; therefore, all sales qualify as expense-based.
Cinnafilm welcomes new resellers as they make sense from a geographical/territorial perspective. Please contact sales for more information.
Cinnafilm continually evaluates partnership opportunities to expand our marketplace presence. We’ll be happy to discuss any business development opportunities with you. Just drop us a note.
No. Cinnafilm no longer integrates/sells workstations. However, our expertise in hardware is freely shared with anyone who asks.
Cinnafilm has 3 plug-ins for Enterprise transcoding platforms. The platforms and the plug-ins which are integrated are as follows:
All Cinnafilm software is CUDA-based; therefore, only NVIDIA GPUs are supported. For the latest list of minimum and recommended GPUs, please click here.
PixelStrings, Cinnafilm’s enterprise-grade media transformation platform, will operate on any Intel-based Windows workstation/server whose chipset is no older than Xeon E-series v2 and which has at least 24 GB of RAM. For the latest list of minimum and recommended hardware specifications, please click here.
Cinnafilm software runs on 64-bit Windows operating systems: Windows Pro for Workstations for workstation deployments, and Windows Server 2012, 2016, or 2019 Standard for enterprise server deployments.
Yes. Please see https://cinnafilm.com/required-3rd-party-software/
No suitable GPU is found on the system. Please install a Maxwell series or newer NVIDIA GPU.
When trying to launch DE Pro, if the “side-by-side” configuration error is shown, then install the “Microsoft Visual C++ 2008 SP1 Redistributable (X64)” software from the Required Software page.
This is fixed by running your command prompt as administrator. To run your command prompt in administrator mode, right-click on the CMD executable from the Start Menu and select “Run as Administrator.” This may be required even if you are logged in as administrator.
To avoid needing “Run as Administrator” every time, open regedit, navigate to the following location, and set the EnableLUA value to 0.
Head to https://app.pixelstrings.com or click “Login” at the top of this page. Register as a first-time user to create your account; then, you will be able to log in using the username and password you created.
Sometimes with larger email providers (Google, Yahoo, Hotmail, etc.) these emails can end up getting flagged as spam or junk mail.
PixelStrings users have access to our API for PixelStrings Cloud and PixelStrings On-Prem. The APIs are very similar between cloud and on-prem versions, with full extensibility to the PixelStrings feature set. Users can find a link to our API documentation in the Settings section under the API tab. On-premises installations direct users to a localhost implementation of the documentation, while PixelStrings SaaS users are directed to an online version of the documentation.
PixelStrings (PxS) has managed storage for all plans or BYOS (Bring Your Own Storage) for paid subscriptions (currently only Amazon S3 storage is supported).
Currently we support AWS S3 and PixelStrings Managed Storage. Soon we will add new options such as Azure, Wasabi, and Google. The PixelStrings PaaS is a growing ecosystem, so look for new features regularly that improve operability and user experience in the cloud, such as the upcoming FileCatalyst data transfer integration.
In AWS: in the Oregon (us-west-2), Virginia (us-east-1), and Ireland (eu-west-1) regions. We recommend that you locate your cloud storage in one of these regions to minimize the egress costs that storage providers will charge you. We do not currently provide options for setting up PixelStrings in other regions or for a private cloud, but please contact [email protected] if you have a specific request.
Note: Be sure to store in one of the compute zones or the app will not connect.
We charge by the output runtime minute only, rounded up to the nearest minute. If your video is 10 seconds long, we charge you for one minute – same for 59 seconds or 60 seconds. This cost is fixed, and includes all the compute, reverse egress, codecs, and conversion required to perform the process. The price also includes technical and workflow support to ensure you receive the results you are expecting. It is truly a pay-as-you-go system, which is why it is so different from what has been done before.
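The rounding rule above is simple enough to sketch. Here is a minimal illustration of the billing arithmetic; the per-minute rate is a hypothetical parameter, not a published Cinnafilm price:

```python
import math

def billable_minutes(output_runtime_seconds: float) -> int:
    """Round output runtime up to the nearest whole minute (minimum one)."""
    return max(1, math.ceil(output_runtime_seconds / 60))

def job_cost(output_runtime_seconds: float, rate_per_minute: float) -> float:
    """Flat per-minute pricing; compute, egress, codecs, and conversion
    are all included in the single rate. The rate here is hypothetical."""
    return billable_minutes(output_runtime_seconds) * rate_per_minute

# 10 s, 59 s, and 60 s outputs all bill as one minute; 61 s bills as two.
for secs in (10, 59, 60, 61):
    print(secs, "->", billable_minutes(secs), "minute(s)")
```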
No. Select just one conversion option or all of them together – the pricing is the same regardless of what options you pick. The only exception would be third-party services, such as IMF from CineCert, HDR with Dark Energy Xenon, and audio retiming with Skywalker Sound Tools, which will have an associated upcharge (still calculated by the minute).
Tachyon is an outcome-based technology. So when you create your workflows, use the “ALLOW” features to enable Tachyon to automatically handle situations when they arise. For example, selecting “Allow Telecine Removal” will let Tachyon’s pattern match algorithms remove field-based conversion patterns automatically if and when they are encountered.
Tachyon’s internal buffers work as a mezzanine, storing the video essence determined on a scene-by-scene basis. Once the video essence is known, Tachyon automatically adjusts all output algorithms to create the best possible frames for each video essence. Two stage renders (i.e. one to remove telecine and then one to frame rate convert) are not needed with Tachyon.
Tachyon processing speed is based on resolution and the type of GPU being used. Newer GPUs that pack more cores, higher memory bandwidth, and higher teraflops will significantly outperform older GPUs. Speed estimates below are for an NVIDIA Turing-series Quadro 6000 GPU (non-Ampere chipset).
Tachyon automatically looks for duplicate frames and, if Motion Compensation is enabled, replaces them with motion-compensated frames to maintain sync with audio and captions.
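The detection half of that process can be illustrated with a toy example. This is a hypothetical sketch, not Tachyon's algorithm: frames are flat lists of pixel values, and a frame is flagged as a duplicate when its mean absolute difference from the previous frame falls at or below a threshold.

```python
def find_duplicates(frames, threshold=0.0):
    """Return indices of frames (near-)identical to their predecessor.
    Illustrative only: real detectors compare decoded images and use
    tuned thresholds; Tachyon's actual method is not public."""
    dupes = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1])) / len(frames[i])
        if diff <= threshold:
            dupes.append(i)
    return dupes

frames = [[10, 10], [10, 10], [20, 30], [20, 30], [40, 50]]
print(find_duplicates(frames))  # [1, 3]
```

Once duplicates are identified, a motion-compensation stage would synthesize replacement frames at those indices rather than simply repeating their neighbors.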
Typically no. Tachyon is a one-GPU-per-video transformation. Some transcoders (e.g., Wohler RadiantGrid) have introduced time-slicing, which chops a project up into many pieces and transcodes them simultaneously. In this instance, Tachyon would be invoked on each piece, and if there were enough GPUs to be paired with each time slice, every slice would be standards-converted simultaneously.
There are no frame rate or resolution limits with Tachyon. The limits are solely based on what the container and codec can support.
Allow 2:2 and Allow 2:3 are used when you want to preserve a project which was captured at a filmic frame rate. 23.976p/24p/25p are all filmic rates which have a distinctive amount of motion blur. By selecting “Allow 2:2” in workflows with an interlaced output, Tachyon will double the frames and place them into the upper/lower fields of each frame. This is typically used when going from a 23.976p or 29.97p source to a 25i target.
Allow 2:3 is typically used when going from 23.976 to 29.97i or from 25p to 29.97i. (25p to 29.97i requires Motion Compensation as well. The conversion will go from 25p to 23.976p, then 2:3 will be applied to the resultant 23.976p)
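The 2:3 cadence itself is easy to visualize: each progressive frame alternately contributes two, then three fields, so four 23.976p frames become ten fields, i.e. five 29.97i frames. The sketch below illustrates the cadence only (field parity, i.e. top versus bottom field ordering, is omitted); it is not Tachyon's implementation:

```python
def pulldown_2_3(frames):
    """Map progressive frames to a 2:3 field cadence: frames alternately
    contribute 2 then 3 fields, so 4 frames -> 10 fields (23.976p -> 29.97i).
    Field parity is omitted for clarity."""
    fields = []
    for i, frame in enumerate(frames):
        repeats = 2 if i % 2 == 0 else 3
        fields.extend([frame] * repeats)
    return fields

print(pulldown_2_3(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']  (10 fields)
```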
Analogous to “Allow 2:2” and “Allow 2:3”, Frame Double and Allow 4:6 are for preserving the filmic look of lower frame rate sources at higher playback speeds. Frame Double is just that – we double the frames. This is ideal for 24p to 48p, 25p to 50p, or 29.97p to 59.94p. Allow 4:6 is specifically for going from 23.976p to 59.94p.
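Frame doubling is the simplest cadence of all: every source frame is emitted twice, exactly doubling the rate while preserving the source's motion blur. A minimal sketch:

```python
def frame_double(frames):
    """Repeat every frame once: 24p -> 48p, 25p -> 50p, 29.97p -> 59.94p,
    preserving the filmic motion character of the source."""
    return [f for frame in frames for f in (frame, frame)]

print(frame_double(["A", "B", "C"]))  # ['A', 'A', 'B', 'B', 'C', 'C']
```

Allow 4:6 achieves its 2.5x rate change (23.976p to 59.94p) with an alternating frame-repeat pattern rather than a uniform doubling.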
Since Tachyon analyzes every scene for the video essence frame rate, it is best to allow the motion compensation engine to automatically adjust for when different video essences are tucked into a project unexpectedly. For example, a 25i project could be 50 fields, but there might be a 25 progressive segmented frame (PSF) section in the file. That 25 PSF section needs to be treated differently than the 50 fields section. In AUTO mode, Tachyon will make that adjustment. If motion comp settings are specified manually, there will be no adjustment for the 25 PSF section versus the 50 fields sections.
Dark Energy is a very simple, but very powerful series of algorithms that are invoked by simply “enabling” it. We recommend leaving Dark Energy in Auto mode until you have a feel for how it will optimize images.
All Cinnafilm plug-ins are designed on the same code base. So when you invoke features from the different plug-ins, it is running in the same process on the GPU. If a project requires multiple plug-ins, there is no need to create separate workflows for performing Tachyon/Wormhole/DarkEnergy functions. Load it up and perform a single render.
Dark Energy will slow things down, but the impact depends on how many Tachyon processes are running as well as the raster size of the video file. See “How fast is Dark Energy” for a feel for how fast it will run on its own.
Dark Energy performance is fully dependent on the resolution of the image and the capability of the GPU installed. Keep in mind that, like all other Cinnafilm products, Dark Energy only works in the uncompressed space, so bit depth also plays a major role in how fast Dark Energy will denoise images. We are assuming 10-bit for the following estimations and an NVIDIA Turing-series Quadro 6000 (non-Ampere) GPU.
Absolutely. Dark Energy’s virtual film development allows you to control the grain size, frame size (8mm to 100mm), amount of grain, and the hue of the grain. Start with the AUTO setting and then move up or down from the MEDIUM setting to achieve the look you want. (Medium maps to the AUTO setting).
No. The AUTO settings in Dark Energy are calculated at a granular level, with analysis performed many times per scene. What MEDIUM means for one part is not the same as for another, because luminance changes affect noise, making constant re-analysis for the optimal noise reduction settings necessary. The algorithm settings that achieve MEDIUM noise reduction on one scene will certainly differ from the next.
Start with AUTO and process a small clip. If it looks great, you’re done! If you feel the noise is still too heavy, then move the Noise Reduction setting to High. If you feel the image is ringing, set the Sharpening setting to Low. If you feel you want a grittier look, decrease the size to Small and increase the Image Texture Amount to High. With AUTO mapping to Medium on all settings, Dark Energy is very easy to dial in to the exact look you need.
Dark Energy expertly removes noise prior to upres, ensuring only the image is scaled, not unwanted noise. However, you cannot leave the image alone once it is scaled; you must re-introduce “raster appropriate” texture that helps bring the warmth back to the image. Failing to re-introduce proper image texture leaves images looking dry and plastic, and not very believable.
This is going to be a subjective answer as it is very hard to measure what “looks good” and what doesn’t.
Wormhole imposes no limits; users can reduce or expand run times by any amount. At extreme retimes (e.g., creating super slow motion, or retiming an asset into a montage), the audio becomes unusable, and we automatically shut off audio retiming. However, most clients using this to fit a time slot or add commercial breaks do not exceed 10%, because beyond that the video and audio, even when retimed accurately with high quality, just don’t look or sound natural.
Yes – anything from 1% to 900% slower than the original. The quality of the super slow motion will be completely dependent on the original frame rate. For example, 24p can probably only be slowed down 100% before artifacts start impacting the quality, but a 125 fps source can easily be taken to an 800% slow down.
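The arithmetic behind those percentages is straightforward, assuming "100% slower" means the runtime doubles. The sketch below is purely illustrative of that reading, not Wormhole's implementation:

```python
def retimed_duration(duration_s: float, percent_slower: float) -> float:
    """Duration after slowing by N percent: 100% slower = twice as long."""
    return duration_s * (1 + percent_slower / 100)

def interpolation_ratio(source_fps: float, percent_slower: float,
                        target_fps: float) -> float:
    """Output frames that must be produced per source frame; ratios well
    above 1.0 mean heavily synthesized frames, hence the frame-rate caveat."""
    return retimed_duration(1.0, percent_slower) * target_fps / source_fps

# One second of 125 fps slowed 800% and delivered at 25 fps:
print(retimed_duration(1.0, 800))          # 9.0 seconds
print(interpolation_ratio(125, 800, 25))   # 1.8 output frames per source frame
```

By the same measure, 24p slowed 800% for 25p delivery would need about 9.4 output frames per source frame, which is why a high-frame-rate source survives extreme slow motion so much better.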
Yes. Embedded NTSC and side car captions are retimed. PAL/25 FPS based captions currently are not retimed.