CEA-608 closed captions #11
Was just about to request this. You beat me to it. Either this or the Apple WebVTT way would be great. |
Vote up for this. |
sample streams welcomed |
Hi, what kind of samples do you need? You can find one example of HLS captioning here: http://www.cpcweb.com/webcasts/webcast_samples.htm#HLS |
Okay, I've done a ton of research on this. When you're parsing an AVC PES packet, there's an additional NAL unit type for SCTE 128 signals, with a value of 6. These packets are known as "Supplemental Enhancement Information" or SEI. They can carry many things, including 608/708 caption data. To confirm it's 608/708 captions:
To distinguish between cc_data and "bar" data, grab the next byte: 3 means closed captions, 6 means "bar" data. If it's 3, then you need to determine how much CC data is in the SEI. To do this, grab the next 8 bits but zero out the first 3 bits, no matter what they are, and convert to an integer (i.e. you're just using 000 and the last 5 bits of the byte). This value tells you how many cc items there are, which you can use to calculate the number of bytes needed: 2 + count * 3, since each cc item is 3 bytes and there are two bytes of headers. From here, I'm pretty sure we just need to convert the cc_data bytes (not the rest of the bytes we had to parse before that) into Base64 and dispatch them via an onCaptionInfo event, with a payload of: { At this point, I believe we can simply leverage a 3rd-party OSMF live CC plug-in. I figured this out from: Then you can test it out using Adobe's CC lib by downloading a free trial of Adobe Media Server and looking for OSMFCCLib.swc. It works with HDS, and it parses the 608/708 stuff for you, as long as the onCaptionInfo events are fired from the stream. I've got a modified version of TSDemuxer that does all of the parsing, but I can't figure out how to add / listen to the FLVTags, or whether I'm creating them correctly. It's just a new frame.type section in _parseAVCPES, and a Base64 method from Flex. What's the best way to review this with you? |
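The byte arithmetic described above can be sketched as a standalone helper. This is a hypothetical Python illustration, not flashls code; the 0x03/0x06 type codes and the 5-bit count follow the description in this comment:

```python
import base64

def extract_cc_data(user_data: bytes):
    """Parse caption bytes out of an SEI user-data block.

    user_data starts at the type byte described above:
    0x03 = closed-caption data, 0x06 = "bar" data.
    Returns the raw cc bytes (ready for Base64), or None for non-caption data.
    """
    if user_data[0] != 0x03:           # 0x06 would be "bar" data
        return None
    # Zero out the top 3 bits of the next byte; the low 5 bits are the count.
    cc_count = user_data[1] & 0x1F
    # 2 header bytes plus 3 bytes per cc item.
    needed = 2 + cc_count * 3
    return user_data[2:needed]

# Dispatching Base64, as the comment suggests:
cc = extract_cc_data(bytes([0x03, 0xE1, 0xFC, 0x94, 0x2C]))
payload = base64.b64encode(cc).decode("ascii")
```

The count byte fixes the block length up front, so the same slicing works however many cc items the SEI carries.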
I've got an experimental version that inserts onCaptionInfo calls into HLSNetStream. They're being dispatched fine, and Adobe's OSMFCCLib kind of parses them, but I'm seeing the text get a little garbled. I'm wondering if maybe they're being dispatched in the wrong order, because you can recognize the letters, but they're mixed up a bit. |
I've been testing with the m3u8 above: http://www.cpcweb.com/webcasts/hls/cpcdemo.m3u8 |
Nice! Have you been able to sort out your garbled text issue? |
Not yet... |
I'm taking another stab at this. It's going to require a Base64 library. Is there one you prefer to use? |
Hi @jlacivita, great! |
Okay, I'm pretty sure I'm inserting the 608 captions into the stream correctly. However, I suspect there's a timing issue with the order they come down in. There are a few things I'm blocked on:
I am still seeing the garbled captions, but they're appearing at about the right time, and the garbled letters do resemble the expected captions; they're just in the wrong order or something. The strange thing is that occasionally, if my CPU is stressed, the captions show up properly... as if missing frames somehow gets the onCaptionInfo messages sorted properly. Any thoughts? |
Hi @jlacivita, seeing the code would help. Any pointer? |
http://demo.theplatform.com/pdk/flashls/TSDemuxer.as.zip Forked about a week ago. |
You should merge this commit: a0d6aa6. If you don't, you will be subject to potential issues. |
Hi, I merged the commit, with no change. I also removed the timestamp computation and just use pes.pts for all of the FLVTags. It's hard to say if they are garbled during parsing, as I'm not parsing them completely, just enough to insert them into the stream. I suppose that is my next step... |
Okay, new information. The CCs don't start getting garbled until the entire M3U8 is buffered, so something is going wrong once buffering is done. If I throttle my connection to prevent the buffering from completing before the subtitles, then they look perfect every time. Any idea what code could be running at the end of buffering that breaks the onCaptionInfo messages? |
:( That's not actually the case. I do know that every once in a while the captions start rendering properly without changing the code. It's still a mystery as to why. |
Would love to see this working! |
I second that in a big way ;) |
Not trying to get this off track, but just curious what you guys are using to get 608 captions into your feeds. I'm not aware of any open source solutions. We currently use FFmpeg, and I know for sure it doesn't have this capability. |
I work for NASA. We do frequent live shows that are captioned in real time through external providers and inserted into the video stream by something like an Evertz or EEG caption inserter. The content either goes to NASA TV (we have three streams that go out 24/7 - see http://www.nasa.gov/multimedia/nasatv/index.html) and to Ustream, or in some cases to other internal or external websites. The NASA TV content that's not live is played out from a broadcast-style playout server and sent to satellite, with embedded captioning support. The NASA TV streams treat it all as live content that we either grab just before it goes to satellite, or pull off satellite and encode.
We are required to provide captioning by Section 508 of the Rehabilitation Act, and prefer to use the same method for broadcast and web delivery. Live events not going to NASA TV are where it would be great to have a free player that supports HLS, or HLS in Flash, and closed captions. The live encoding typically uses hardware encoders with SDI input to maintain the metadata: Elemental or Inlet encoders for NASA TV, TriCasters and various other gear in the field. I don't know of any software encoders that currently support the 608/708 metadata for live capture. For produced content, Adobe Premiere allows editing and embedding of captions. Other non-linear editors have solutions with plug-ins.
If we're only delivering to the web, then WebVTT or SRT or some other sidecar is fine, but we often need to play that same content over internal cable systems, so again embedded is nice. And it's good to have the captions embedded when folks download the files. The downloader may not need or care about captions, but someone they send it to may. I don't know of any open source solutions, either, at least nothing easy or reliable. |
@mangui I'm not able to figure this one out on my own, but this is a very important feature for an HLS client. Live content is almost exclusively encoded with CEA 608/708, and it's often preserved when the content moves over to VOD. Since CEA 608/708 is part of the TS fragments, there's no easy way for code outside of flashls to decode and sync them. Any chance we could get this added to flashls in an upcoming release? |
Vote up. Would be a great help. Sounds like @jlacivita is close. Just needs a little support to get it performing smoothly. |
I'd also love to see this integrated. I've merged jlacivita's code into my clone of your dev branch, and added an HLSEvent for CLOSED_CAPTION_INFO. I've checked that I'm getting the same Base64 encodings as I get from @jlacivita's branch. @mangui, do you want a pull request, or do you prefer to take this from @jlacivita? @jlacivita, what do we need to do with the encoded data block in order to see the actual Closed Captions? It looks like your code just currently logs the encoded string to the browser console. Let me know if I can help. |
@gyachuk It's a matter of integrating the captioning library from Adobe |
@gyachuk plz submit a PR, it will be easier to review, tks |
Thanks @kfeinUI. I see that there is already an OSMF.swc in flashls/lib/osmf. Is that the one to use? I've also downloaded Adobe Media Server and have found OSMFCCLib.swc down in the Linux samples folder. Perhaps that's the right one to use? Haven't found much useful documentation on how to use it (in the context of flashls). Any pointers would be greatly appreciated. Thanks. |
Also, I realize that this might not be exactly the right forum for this, since I'd most likely be using it in the host application (mediaelements). |
Since this page is the place for all things related to flashls + CEA-608, here's a link to my pull request, focused entirely on this issue: I've wired up onCaptionInfo events, and the OSMFCCLib renders them, but it seems that some of the characters are missing and I'm stuck as to why. I need help to get this working before we can actually do the pull. |
Anyone following this, I think we're very close on the parsing side. If you've got a view to display them, you've got 608/708 captions working with flashls: |
Excellent work! I think that’s a big deal in the accessibility and government worlds. |
Great! Sounds like you've got the core in place and made it past the major hurdle. Looking forward to being able to integrate it. |
Great work! Any chance this is coming soon? |
@mangui, I'm back to fooling around with closed captions. Nice to see that you've pulled code from @jlacivita. I don't have the luxury of using OSMFCCLib, so I need to get the data that @jlacivita is putting into the FLV tag. I'm wondering if you have thoughts on how you'd like to have this done? I thought at first that I'd be able to do something like this:

```actionscript
var _hls:HLS = new HLS(...);
_hls.stream.addEventListener("onCaptionInfo", _captionInfoHandler);
```

but I never get the event. Looking at your implementation of HLSNetStream, you seem to be registering callbacks on the _client, so I tried this:

```actionscript
var _hls:HLS = new HLS(...);
_hls.client.registerCallback("onCaptionInfo", _captionInfoHandler);
```

but it seems that _hls.client is always null. And besides, you seem to be overriding the NetStream client getter to return _client.delegate, which I'm not sure gets me any further. What I've done is add the following code to HLSNetStream:

```actionscript
public function HLSNetStream(...) {
    ...
    _client.registerCallback("onCaptionInfo", _captionInfoHandler);
}

private function _captionInfoHandler(data:Object) : void {
    _hls.dispatchEvent(new HLSEvent(HLSEvent.CLOSED_CAPTION, data.data));
}
```

(where I've added the new HLSEvent.CLOSED_CAPTION and associated event dispatching) Does this seem like a reasonable approach, or is there some better way to leverage _hls.stream? |
@gyachuk You should definitely be able to get the events, since that's how they get into any library. I think to ensure you add the event listener at the correct time, you need to use Traits:
Let me know if you start seeing the events! |
@jlacivita Thanks for the response and guidance. Like I said, I don't have the luxury of using OSMFCCLib. That's because the player I'm using isn't based on OSMF. I don't have a MediaPlayer, so I need to get the data from the HLS instance that is being created. I can get to the underlying HLSNetStream using the getter HLS::get stream(), but I can't seem to get any events directly from that. That's what I tried in the first block of code in my previous question. Hey, while I've got your attention ( ;-), I notice that you Base64.encode() the CC triplets before putting them into the FLV tag. Is there a reason they can't just be the raw data? |
BTW, I've modified HLSNetStream to dispatch the CC triplets (similar to how the ID3 data is dispatched), and am sending them through my decoder. I haven't seen any problems with the data being stuffed into the FLV tag. Really great job on that. I'm just wondering if there's already an existing way to get the data out of the stream, without OSMF. |
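For anyone rolling their own decoder like this, two early sanity checks help diagnose garbled output: the cc_valid/cc_type flags in the first byte of each triplet, and the CEA-608 odd-parity bit on the two data bytes. A minimal Python sketch (hypothetical helpers, not part of flashls; the field layout follows the usual ATSC cc_data conventions):

```python
def cc_fields(b0: int):
    """First byte of each 3-byte cc item: 5 marker bits, cc_valid, cc_type."""
    cc_valid = (b0 & 0x04) != 0
    cc_type = b0 & 0x03   # 0/1 = 608 field 1/2, 2/3 = 708 (DTVCC) packet data
    return cc_valid, cc_type

def strip_parity(b: int):
    """608 data bytes carry odd parity in the high bit; reject bad bytes."""
    if bin(b).count("1") % 2 != 1:
        return None       # parity error: drop rather than render garbage
    return b & 0x7F
```

Dropping triplets where cc_valid is false and decoding only one field's pairs, in arrival order, is a cheap way to rule out interleaving as the cause of scrambled characters.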
Hmm, I'm less familiar with doing it outside of OSMF, but I believe you just need to create a client Object (no particular interface needed), give it a method called onCaptionInfo, and then call netStream.client = ... The problem you might run into is that flashls already has its own client, and NetStreams can only have one client at a time. You probably did the right thing by adding the dispatchers in the same place as ID3 |
Does anybody have an example with live streaming? |
Maybe this is an old issue, |
Jw player is what we are using. |
Can you please elaborate more on JWPlayer? What license / package do I need that supports CEA-608 subtitles? |
It's whatever the $300-a-year one is. If you compare the versions, it'll mention CEA-608 support |
I believe CEA-608 support for HLS requires the Enterprise license. Premium gives you HLS streaming, and it gives you caption support in the Flash player, but you need Enterprise to get the caption support in HLS. |
Nope. We do hls with cea-608 and it works fine in flash player |
Thank you, we tried it, and it seems that we managed to solve everything with JW Player |
It would be great to have support for embedded closed captions, specifically CEA-608.
For reference, here are details on JWPlayer's version:
http://support.jwplayer.com/customer/portal/articles/1430278-cea-608-captions