Conversation


@Utsal20 Utsal20 commented Jan 23, 2020

Things left to do:

  • Modify logic to read from and write to an AWS bucket (see the rough sketch below the list)
  • Make the implementation more robust
  • Possibly add tests
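
A rough sketch of the S3 read/write direction (the bucket name, keys, and helper names below are placeholders, not settled yet):

```python
import boto3

s3 = boto3.client('s3')

# Placeholder bucket name; the real bucket is still to be decided.
BUCKET = 'my-transcribe-bucket'

def download_video(key, local_path):
    # Pull the source video down from S3 before transcription.
    s3.download_file(BUCKET, key, local_path)

def upload_transcript(local_path, key):
    # Push the generated .srt / .txt back up to S3 once processing is done.
    s3.upload_file(local_path, BUCKET, key)
```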

@Utsal20 Utsal20 requested a review from ramnanib2 January 23, 2020 05:22
logger.info("Starting Transcription of Video File: %s" % video_file)
comment = transcribe_video_file(video_file)
output[video_file] = comment
convert_transcribe_to_srt(video_file)

@ramnanib2 ramnanib2 Jan 29, 2020


It would be nice to do a best-effort post-processing of comments within a try-catch block, so that if the post-processing of a single transcript fails we can still move on and complete the rest.
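
Something along these lines, as a rough sketch (assuming the surrounding loop iterates over the collected video files as video_files):

```python
for video_file in video_files:
    try:
        logger.info("Starting Transcription of Video File: %s" % video_file)
        comment = transcribe_video_file(video_file)
        output[video_file] = comment
        convert_transcribe_to_srt(video_file)
    except Exception:
        # Best effort: log the failure and keep going with the remaining files.
        logger.exception("Post-processing failed for video file: %s" % video_file)
        continue
```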

else:
    end = format_time(items[len(items)-1]['end_time'])

with open(transcript_file_name_from_video_file_name(video_file).replace('.json', '.srt'), 'w', encoding='utf-8') as f:

Ideally we'd like to create one new srt file and one new txt file containing the topmost ranked transcript.
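
For the .txt half, something like this should be enough (a sketch; it assumes raw is the parsed Transcribe JSON, which keeps the best transcript text under results.transcripts):

```python
# Write the top-ranked transcript text alongside the .srt file.
txt_file = transcript_file_name_from_video_file_name(video_file).replace('.json', '.txt')
with open(txt_file, 'w', encoding='utf-8') as f:
    f.write(raw['results']['transcripts'][0]['transcript'])
```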

logger.info("Conversion to srt started for video file: %s" % video_file)
with open(transcript_file_name_from_video_file_name(video_file), encoding='utf-8') as f:
    raw = json.load(f)
items = raw['results']['items']

If results is null or empty, or items is null or empty, log that and move on.
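
For example (a sketch, assuming an early return out of convert_transcribe_to_srt is fine here):

```python
results = raw.get('results') or {}
items = results.get('items') or []
if not items:
    logger.info("No transcription results/items for video file: %s, skipping" % video_file)
    return
```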

start = format_time(current)
if token['type'] == 'punctuation':
    next_line = next_line[0:-1] + token['alternatives'][0]['content']
    end = format_time(items[counter - 1]['end_time'])

If the punctuation token is at the very beginning of the transcript, counter - 1 is -1, so this would silently pick up the last item's end_time (or raise an IndexError if items is empty) rather than what we want here.
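
A possible guard (sketch only; counter is assumed to be the enumerate index over items, as in the surrounding loop):

```python
if token['type'] == 'punctuation':
    if counter == 0:
        # Punctuation with no preceding word: nothing to attach it to, skip it.
        continue
    next_line = next_line[0:-1] + token['alternatives'][0]['content']
    end = format_time(items[counter - 1]['end_time'])
```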

next_line = token['alternatives'][0]['content'] + ' '
current = float(token['start_time'])
else:
    next_line += token['alternatives'][0]['content'] + ' '

It seems like any token with end_time - start_time > 5.0 will be written to the srt file. However, I'm not seeing how a sequence of tokens with smaller individual time spans gets strung together into a single sentence?
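
One way to string the shorter tokens together into roughly 5-second subtitle lines (a rough sketch only; write_srt_entry and seq_number are illustrative names, not from this diff):

```python
next_line, start, current = '', None, None
seq_number = 1
for token in items:
    if token['type'] == 'punctuation':
        # Attach punctuation to the word before it, without a leading space.
        next_line = next_line.rstrip() + token['alternatives'][0]['content'] + ' '
        continue
    if start is None:
        start = float(token['start_time'])
    next_line += token['alternatives'][0]['content'] + ' '
    current = float(token['end_time'])
    # Flush the accumulated words once the spoken span grows past ~5 seconds.
    if current - start > 5.0:
        write_srt_entry(f, seq_number, format_time(start), format_time(current), next_line.strip())
        seq_number += 1
        next_line, start = '', None
# Flush whatever is left at the end of the transcript.
if next_line.strip() and start is not None:
    write_srt_entry(f, seq_number, format_time(start), format_time(current), next_line.strip())
```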


@ramnanib2 ramnanib2 left a comment


Great work! Thanks for making the changes. Some minor comments. I think the logic within the items loop can be simplified a little.
