Support FAQ
Sales/Partnership
Does Cinnafilm sell stand-alone Tachyon software (inclusive of transcoder)?
Tachyon is available through Cinnafilm’s enterprise transcoding products, PixelStrings Cloud and PixelStrings On-Prem, which support all image processing functions of the Spacetime product family (Dark Energy, Tachyon, Wormhole).
Do you rent/lease your software?
Sold directly by Cinnafilm or through our resellers, Cinnafilm software is licensed by the use (SaaS) or by quarterly/annual subscription. Cinnafilm does not sell permanent licenses of its software.
Do you have expense-based solutions for your software?
Yes. All software sold directly by Cinnafilm or through our resellers is licensed on a subscription or usage basis, so all sales qualify as expense-based.
How do I become a Cinnafilm reseller?
Cinnafilm welcomes new resellers as they make sense from a geographical/territorial perspective. Please contact sales for more information.
Can my company integrate/OEM your software functionality?
Cinnafilm continually evaluates partnership opportunities to expand our marketplace presence. We’ll be happy to discuss any business development opportunities with you. Just drop us a note at [email protected].
Can I purchase a workstation from Cinnafilm?
No. Cinnafilm no longer integrates/sells workstations. However, our expertise in hardware is freely shared with anyone who asks.
Hardware/General Functionality
What platforms are your plug-ins developed for?
Cinnafilm has four image processing plug-ins for enterprise transcoding platforms. The platforms and their integrated plug-ins are as follows:
- Cinnafilm PixelStrings – Tachyon, Tachyon Wormhole, Dark Energy, Dark Energy Xenon
- Dalet Amberfin – Tachyon
- Evertz Mediator X – Tachyon
- HS-ART Diamant – Tachyon, Dark Energy
- EVS XTAccess – Tachyon
- Imagine Selenio File – Tachyon, Dark Energy
- Telestream ContentAgent – Tachyon, Dark Energy
- Telestream Vantage – Tachyon, Dark Energy
- Telestream Cloud and Vantage Port – Tachyon cloud
What kind of GPU is required?
All Cinnafilm software is CUDA based, therefore only NVIDIA GPUs are supported. For the latest list of minimum and recommended GPUs, please click here.
What CPU/Memory/Storage/Networking are required?
PixelStrings, Cinnafilm’s enterprise-grade media transformation platform, will operate on any Intel-based Windows workstation/server whose chipset is no older than Xeon E-series v2 and has at least 24 GB of RAM. For the latest list of minimum and recommended hardware specifications, please click here.
What operating systems are supported?
Cinnafilm software runs on 64-bit Windows operating systems: Windows 11 Professional for workstations, and Windows Server 2019 Standard or newer for enterprise server deployments.
Are there any 3rd party requirements?
All Cinnafilm software requires the following 3rd party software support:
- Microsoft Visual C++ 2008 SP1 Redistributable Package (x86): download from the Cinnafilm Release FTP, Utilities folder
- Microsoft Visual C++ 2008 SP1 Redistributable Package (x64): download from the Cinnafilm Release FTP, Utilities folder
- Latest NVIDIA drivers: http://www.nvidia.com/Download/index.aspx?lang=en-us
- WIBU CodeMeter Runtime-Kit 6.10a for Windows 32 and 64 Bit v5.22 or newer: http://codemeter.com/us/service/downloads.html
Dark Energy Professional also requires:
- Microsoft DirectX End-User Runtime Web Installer: https://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=35
Troubleshooting
I am getting a “no CUDA capable device” error.
No suitable GPU is found on the system. Please install a Maxwell series or newer NVIDIA GPU.
How do I solve a side-by-side configuration error when launching DE Pro?
When trying to launch DE Pro, if the “side-by-side” configuration error is shown, then install the “Microsoft Visual C++ 2008 SP1 Redistributable (X64)” software from the Required Software page.
When performing a regsvr32 on the DLL it is returning an error code of 0x80070005
This is fixed by running your command prompt as administrator. To run your command prompt in administrator mode, right-click on the CMD executable from the Start Menu and select “Run as Administrator.” This may be required even if you are logged in as administrator.
To avoid needing “Run as Administrator” in the future, open regedit, navigate to the following key, and set EnableLUA to 0 (note that this disables User Account Control system-wide and takes effect after a reboot):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system
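The same change can be applied without opening regedit by importing a .reg file such as the sketch below. Again, be aware that EnableLUA=0 disables User Account Control system-wide and requires a reboot to take effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableLUA"=dword:00000000
```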
PixelStrings PLUS (SaaS)
How do I get started?
Head to https://app.pixelstrings.com or click “Login” at the top of this page. Register as a first-time user to create your account; then, you will be able to log in using the username and password you created.
I've tried to reset my password or verify my PixelStrings account but haven't received my verification emails
Sometimes with larger email providers (Google, Yahoo, Hotmail, etc.), these emails can be flagged as spam or junk mail, so check those folders.
Is there an API for automated submission to PixelStrings?
PixelStrings users have access to our API for both PixelStrings PLUS and PixelStrings FLEX. The APIs are very similar between the cloud and on-prem versions, with full access to the PixelStrings feature set. Users can find a link to our API documentation in the Settings section under the API tab: on-premises installations direct users to a localhost copy of the documentation, while PixelStrings SaaS users are directed to an online version.
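As a rough illustration only, an automated submission from a script might look like the sketch below. The endpoint path, payload field names, and auth header are hypothetical placeholders; consult the real API documentation under Settings > API for the actual schema.

```python
# Hypothetical sketch of automated job submission to PixelStrings.
# The URL, payload fields, and header names are illustrative placeholders,
# NOT the real PixelStrings API; see Settings > API for the actual schema.
import json


def build_job_payload(source_asset, workflow, output_name):
    """Assemble a submission payload for a single transcode job."""
    return {
        "asset": source_asset,    # asset previously uploaded, or reachable via BYOS
        "workflow": workflow,     # name of a workflow defined in the UI
        "output": output_name,    # desired output file name
    }


def submit_job(api_base, api_key, payload):
    """POST the job; urllib is used so the sketch is dependency-free."""
    import urllib.request
    req = urllib.request.Request(
        f"{api_base}/jobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_job_payload("s3://my-bucket/in.mxf", "tachyon-25i-to-2997i", "out.mxf")
print(payload["workflow"])
```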
Where do I store my assets?
PixelStrings (PxS) has managed storage for all plans or BYOS (Bring Your Own Storage) for paid subscriptions (currently only Amazon S3 storage is supported).
- PixelStrings Managed Storage: Upload your assets through the PixelStrings UI, where they’ll be securely held until you decide to delete them. Deleting an asset permanently removes it from PxS storage; be aware there is no way to recover it afterward.
- Bring Your Own Storage (Amazon S3, Wasabi, Backblaze): PixelStrings will connect to your cloud storage, make a working copy long enough to convert the video with GPU resources in the cloud, place the output back in your storage, and delete all cached copies. This ensures that your assets and PxS-converted assets remain solely under your control. To be clear: you pay any storage/egress costs when using the BYO Storage method. Please contact your storage provider about any additional charges that may arise from connecting your storage to PixelStrings.
What storage do you support?
S3-compatible storage solutions.
Geographically, where is the magical Cinnafilm-Image-Processing compute done?
In AWS: in the Oregon (us-west-2), Virginia (us-east-1), and Ireland (eu-west-1) regions. We recommend that you locate your cloud storage in one of these zones to minimize the egress costs that storage providers will charge you. We do not currently provide options for setting up PixelStrings in other regions or for a private cloud, but please contact [email protected] if you have a specific request.
Note: Be sure to store in one of the compute zones or the app will not connect.
So, tell me exactly what my money is getting me.
We charge by the output runtime minute only, rounded up to the nearest minute. If your video is 10 seconds long, we charge you for one minute; the same applies at 59 seconds or 60 seconds. This cost is fixed, and includes all the compute, reverse egress, codecs, and conversion required to perform the process. The price also includes technical and workflow support to ensure you receive the results you are expecting. It is truly a pay-as-you-go system, which is why it is so different from what has been done before.
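The round-up rule above can be sketched in a few lines (the minimum of one minute for a zero-length output is an assumption for the edge case, not something the pricing text states):

```python
import math


def billable_minutes(output_seconds):
    """Output runtime rounded up to the nearest whole minute, so a
    10-second and a 60-second output both bill as 1 minute.
    The 1-minute floor for zero-length output is an assumption."""
    return max(1, math.ceil(output_seconds / 60))


print(billable_minutes(10))   # 1
print(billable_minutes(59))   # 1
print(billable_minutes(60))   # 1
print(billable_minutes(61))   # 2
```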
Tachyon
Do I need to create a different workflow for every possible input, or do I create the workflows for outputs?
Tachyon is an outcome-based technology. So when you create your workflows, use the “ALLOW” features to enable Tachyon to automatically handle situations when they arise. For example, selecting “Allow Telecine Removal” will let Tachyon’s pattern match algorithms remove field-based conversion patterns automatically if and when they are encountered.
Do I need to create multi-pass (multi-render) workflows to create a mezzanine format, then create the target outputs when dealing with files that contain telecine or other patterns?
Tachyon’s internal buffers work as a mezzanine, storing the video essence determined on a scene-by-scene basis. Once the video essence is known, Tachyon automatically adjusts all output algorithms to create the best possible frames for each video essence. Two stage renders (i.e. one to remove telecine and then one to frame rate convert) are not needed with Tachyon.
How fast is Tachyon?
Tachyon processing speed is based on resolution and the type of GPU that is being used. Newer GPUs that pack more cores, higher memory bandwidth, and more teraflops will significantly outperform older GPUs. Speed estimates below are for an NVIDIA Turing-series Quadro 6000 GPU (non-Ampere chipset).
SD Source
- SD target: Many times faster than real time
- HD target: Many times faster than real time
- UHD target: Real time
HD source
- SD target: Many times faster than real time
- HD target: Faster than real time
- UHD target: Real time
UHD source
- SD target: Faster than real time
- HD target: Faster than real time
- UHD target: Real time
Can I stack GPUs and make Tachyon faster?
No. Tachyon is designed to traverse a single GPU pipeline.
Does Tachyon only remove 2:3 telecine? What if my project also has 4:1 ratio progressive patterns? (4:1 refers to repeating every 4th frame of a 23.976 video so it plays in a 29.97i container)
Tachyon automatically looks for patterns to remove so there is nothing to specify other than “Allow Remove Telecine.” Tachyon will remove 4:1, 2:3, or any other pattern it encounters.
Does Tachyon remove duplicate frames?
Tachyon automatically looks for duplicate frames, and then if Motion Compensation is enabled, it will motion compensate the missing frames to maintain sync with audio and captions.
My cartoon/anime still has interlace artifacts after processing. What can be done about that?
Tachyon has a “Cartoon/Anime” setting that specifically favors the type of repeated frames found in anime. Turning this on, along with “Residual Combing Removal”, should remove any remaining interlacing. Keep in mind some anime have the interlacing baked into both upper and lower fields. When this happens, no automated solution in the world will fix the footage, and it will have to be rotoscoped.
I have compositing errors with telecine source material (telecine pattern cadence between matte and composite were not synchronized before they were combined) and I am trying to create a progressive version for web playback. When I remove telecine, I have interlacing in the areas of motion inside of the composite. Can this be fixed?
Typically no. Tachyon is a one-GPU-per-video transformation. Some transcoders (i.e. Wohler RadiantGrid) have introduced time-slicing, which chops a project up into many pieces and transcodes them simultaneously. In this instance, Tachyon would be invoked on each piece, and if there were enough GPUs to be paired with each time slice, every slice would be standards-transcoded simultaneously.
What are the limits of Tachyon's frame rate and resolution conversion?
There are no frame rate or resolution limits with Tachyon. The limits are solely based on what the container and codec can support.
When do I use the “Allow 2:2” and “Allow 2:3” settings?
Allow 2:2 and 2:3 are used when you want to preserve a project which was captured at a filmic frame rate. 23.976p/24p/25p are all filmic rates which have a distinctive amount of motion blur. By selecting “Allow 2:2” in workflows with an interlaced output, Tachyon will double the frames and place them into the upper/lower fields of each frame. This is typically used when going from a 23.976p or 29.97p source to a 25i target.
Allow 2:3 is typically used when going from 23.976 to 29.97i or from 25p to 29.97i. (25p to 29.97i requires Motion Compensation as well. The conversion will go from 25p to 23.976p, then 2:3 will be applied to the resultant 23.976p)
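To make the cadence concrete, here is a small generic sketch (not Cinnafilm code) of how 2:3 pulldown distributes four progressive film frames across ten interlaced fields, i.e. five interlaced frames:

```python
def pulldown_2_3(frames):
    """Expand progressive frames to fields using a repeating 2,3 cadence.
    Four source frames (A, B, C, D) become ten fields: AA BB BC CD DD."""
    cadence = [2, 3]          # fields emitted per source frame, alternating
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 2])
    # Pair consecutive fields into interlaced frames
    return [(fields[j], fields[j + 1]) for j in range(0, len(fields) - 1, 2)]


print(pulldown_2_3(["A", "B", "C", "D"]))
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

The mixed frames (B,C) and (C,D) are exactly where interlacing artifacts appear if the cadence is not removed before further conversion.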
When do I use the “Frame Double” and “Allow 4:6” settings?
Analogous to “Allow 2:2” and “Allow 2:3,” Frame Double and Allow 4:6 are for preserving the filmic look of lower frame rate sources at higher playback rates. Frame Double is just that: we double the frames. This is ideal for 24p to 48p, 25p to 50p, or 29.97p to 59.94p. Allow 4:6 is specifically for going from 23.976p to 59.94p.
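In the same generic style (again, not Cinnafilm code), frame doubling and a 4:6-style cadence can be sketched at the frame level. The alternating 2,3 repetition shown for 4:6 is one plausible way to realize the 2.5x rate increase from 23.976p to 59.94p, assumed here for illustration:

```python
def frame_double(frames):
    """24p -> 48p style doubling: every frame is emitted twice."""
    return [f for frame in frames for f in (frame, frame)]


def cadence_4_6(frames):
    """23.976p -> 59.94p: frames repeat in an alternating 2,3 pattern,
    so 4 source frames become 10 output frames (a 2.5x rate increase)."""
    out = []
    for i, frame in enumerate(frames):
        out.extend([frame] * (2 if i % 2 == 0 else 3))
    return out


print(frame_double(["A", "B"]))           # ['A', 'A', 'B', 'B']
print(cadence_4_6(["A", "B", "C", "D"]))  # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```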
Why would I want to use AUTO Motion Compensation settings versus specifying settings?
Since Tachyon analyzes every scene for the video essence frame rate, it is best to let the motion compensation engine adjust automatically when different video essences are tucked into a project unexpectedly. For example, a 25i project could be 50 fields, but there might be a 25 progressive segmented frame (PSF) section in the file. That 25 PSF section needs to be treated differently than the 50 fields section. In AUTO mode, Tachyon will make that adjustment. If motion comp settings are specified, there will be no adjustment for that 25 PSF section versus the 50 fields sections.
Dark Energy
How do I use Dark Energy?
Dark Energy is a very simple, but very powerful series of algorithms that are invoked by simply “enabling” it. We recommend leaving Dark Energy in Auto mode until you have a feel for how it will optimize images.
Can I use Dark Energy in the same workflow as Tachyon or Wormhole?
All Cinnafilm plug-ins are designed on the same code base, so when you invoke features from the different plug-ins, they run in the same process on the GPU. If a project requires multiple plug-ins, there is no need to create separate workflows for performing Tachyon/Wormhole/Dark Energy functions. Load it up and perform a single render.
How much slower is my transcoder going to run if I add Dark Energy to a workflow containing Tachyon?
Dark Energy will slow things down, but it is dependent on how many Tachyon processes are running as well as the raster size of the video file. See “How fast is Dark Energy” for a feel for how fast it will run on its own.
How fast is Dark Energy?
Dark Energy performance is fully dependent on the resolution of the image and the capability of the GPU installed. Keep in mind that like all other Cinnafilm products, Dark Energy only works in the uncompressed space, so bit depth also plays a major role in how fast Dark Energy will denoise images. We are assuming 10-bit for the following estimates and an NVIDIA Turing-series Quadro 6000 (non-Ampere) GPU.
SD Source
- SD target: Many times faster than real time
- HD target: Faster than real time
- UHD target: Faster than real time
HD source
- SD target: Many times faster than real time
- HD target: Faster than real time
- UHD target: Real time
UHD source
- SD target: Faster than real time
- HD target: Faster than real time
- UHD target: Real time
Can I introduce more image texture and can I control the image texture?
Absolutely. Dark Energy’s virtual film development allows you to control the grain size, frame size (8mm to 100mm), amount of grain, and the hue of the grain. Start with the AUTO setting and then move up or down from the MEDIUM setting to achieve the look you want. (Medium maps to the AUTO setting).
Since MEDIUM maps to AUTO, does that mean Dark Energy’s noise reduction capability is the same in every file that is processed?
No. The AUTO settings in Dark Energy are calculated at a granular level, with analysis performed many times PER SCENE. What MEDIUM is for one part is not the same as MEDIUM in another. This is because luminance changes affect noise, so Dark Energy must constantly analyze for the optimal noise reduction settings. The algorithm settings that achieve MEDIUM noise reduction on one scene will almost certainly differ from the next.
What is the best way to dial in noise reduction, sharpening, and image texture settings?
Start with AUTO and process a small clip. If it looks great, you’re done! If you feel the noise is still too heavy, then move the Noise Reduction setting to High. If you feel the image is ringing, set the Sharpening setting to Low. If you feel you want a grittier look, decrease the size to Small and increase the Image Texture Amount to High. With AUTO mapping to Medium on all settings, Dark Energy is very easy to dial in to the exact look you need.
What makes the Dark Energy Texture Aware resolution scaling so good?
Dark Energy expertly removes noise prior to up-res, ensuring only the image is scaled, not unwanted noise. However, you cannot leave the image alone once it is scaled; you must re-introduce “raster appropriate” texture that brings the warmth back to the image. Failing to re-introduce proper image texture leaves images looking dry/plastic and not very believable.
How do I know when I have reduced my bitrate to optimal levels?
This is going to be a subjective answer, as it is very hard to measure what “looks good” and what doesn’t.
Wormhole
What are the retiming limits of Wormhole for both compression and expansion?
There are no hard limits with Wormhole; we allow users to reduce or expand run times by any amount. At extreme amounts (e.g. creating super slow motion, or retiming an asset into a montage), the audio becomes unusable, and we automatically shut off the retiming of the audio. However, most clients using Wormhole to fit a time slot or add commercial breaks do not exceed 10%, as beyond that the video and audio, even when retimed accurately with high quality, just do not look or sound natural.
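As a back-of-the-envelope check against the ~10% guideline, the retime percentage needed to fit a given time slot can be computed as follows (the program and slot durations are illustrative numbers, not from the FAQ):

```python
def retime_percent(original_seconds, target_seconds):
    """Percent change in runtime; negative = compression, positive = expansion."""
    return (target_seconds - original_seconds) / original_seconds * 100.0


# Illustrative example: fitting a 22:30 program into a 22:00 slot
change = retime_percent(22 * 60 + 30, 22 * 60)
print(round(change, 2))   # -2.22, i.e. about 2.2% compression, well within 10%
```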
Does Wormhole create super slow motion?
Yes – anything from 1% to 900% slower than the original. The quality of the super slow motion will be completely dependent on the original frame rate. For example, 24p can probably only be slowed by 100% before artifacts start impacting quality, but 125 fps footage can easily be taken to an 800% slowdown.
Does Wormhole also retime/pitch correct the audio?
Yes.
Does Wormhole also retime the closed caption track?
Yes. Embedded NTSC and sidecar captions are retimed. PAL/25 fps captions are only output as DFXP sidecar files.
Tachyon and IPx LIVE
Where can I purchase Tachyon LIVE?
Tachyon LIVE can be purchased directly from Cinnafilm or through Evertz. Please contact [email protected] for more information, or click here for more information about Evertz’s solution using Tachyon LIVE: https://evertz.com/products/
What is the difference between IPx LIVE and Tachyon LIVE?
Cinnafilm’s IPx LIVE is our IP streaming conversion tool, created for extremely low-latency live signals. Tachyon LIVE is a plug-in for IP streaming transcoding systems, optimized for the lowest possible latencies to support near real-time conversion of video signals to/from any corner of the globe.
What protocols does IPx LIVE support?
NDI (switchable), SRT (H.264, HEVC, not switchable), 2110 (JPEG XS, uncompressed), Transport Stream (H.264/HEVC), TR07 (JPEG XS, not very switchable).
Does IPx LIVE support NMOS Session Control?
Yes.
Can I run Tachyon LIVE on-premises or in the cloud?
Tachyon LIVE running on Cinnafilm’s IPx LIVE can run on a virtual or physical Linux server, in any public cloud infrastructure or on premises. Tachyon LIVE in Evertz appliances is an on-premises solution only.