First of all, if you are not familiar with the technical terms, there’s some excellent information and samples here: http://100fps.com/
Nowadays, CRT TVs are no longer being manufactured, so it’s safe to assume that whatever content you broadcast, the overwhelming majority of it will be consumed on flat panel screens (LCD and plasma TVs, tablets, PCs, etc.).
Since flat panel displays cannot show interlaced content natively, any interlaced content you have (e.g. 576i, 480i, 1080i) will be deinterlaced by those devices in real time.
Now if you only have Hollywood movies, i.e. content that was shot on film cameras, it will be progressive rather than interlaced, so you have no problem.
But if you are getting a lot of video content that was shot on interlaced video cameras, whether SD or HD (1080i), you have to deal with this annoying interlacing issue.
Everyone in the industry pretty much agrees by now that interlacing should be eradicated for good, as it is an old analog technology created because of ancient equipment limitations.
Most video cameras today support purely progressive shooting, so 1080p is becoming quite common.
The decision whether or not to deinterlace when encoding should be based on your target audience’s devices.
If you are targeting strictly Over-The-Top customers, who will view your content on PC screens or mobile devices – then you MUST deinterlace.
However, if your customers still use CRT televisions – do not deinterlace.
If you choose to deinterlace, you also have to consider the quality of deinterlacing. When it comes to software, there are quick deinterlacers which give reasonable results, and there are slow, motion-compensated algorithms which produce superior quality (although it is questionable what percentage of your audience can actually tell the difference).
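To make the “quick deinterlacer” idea concrete, here is a minimal sketch of the simplest kind of field-based deinterlacing: split a frame into its two fields and fill each field’s missing lines by averaging the lines above and below (plain linear interpolation, with no motion compensation). Frames are represented as lists of rows of luma values purely for illustration; the function name and data layout are my own, not from any particular tool.

```python
# Minimal sketch of a "quick" deinterlacer (linear interpolation).
# A frame is a list of rows, each row a list of luma values.
# Real deinterlacers work on full YUV planes and the slower,
# higher-quality ones add motion analysis on top of this.

def deinterlace_linear(frame):
    """Return two progressive frames (top-field-first) from one interlaced frame."""
    height = len(frame)
    out = []
    for parity in (0, 1):  # 0 = top field lines, 1 = bottom field lines
        field = {y: frame[y] for y in range(parity, height, 2)}
        rebuilt = []
        for y in range(height):
            if y in field:
                rebuilt.append(field[y])  # keep the real field line as-is
            else:
                # take the neighbouring field lines (clamped at the edges)
                above = field.get(y - 1, field.get(y + 1))
                below = field.get(y + 1, field.get(y - 1))
                # average them to synthesize the missing line
                rebuilt.append([(a + b) // 2 for a, b in zip(above, below)])
        out.append(rebuilt)
    return out
```

Because each output frame is built from only half of the source lines, fine vertical detail is lost – which is exactly why the slow, motion-compensated algorithms mentioned above can look noticeably better on detailed content.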
Another point to consider is how efficiently your encoder can handle interlaced streams. If your encoder is more efficient in progressive encoding, then run it through a reasonable deinterlacer, one that doesn’t add too much to the encoding time. It would give better results than encoding inefficiently in interlaced mode.
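As a sketch of that workflow, the helper below builds an ffmpeg-style command line that inserts a fast deinterlace filter (ffmpeg’s real `yadif` filter) ahead of a progressive encode. The function name, file names, and bitrate are illustrative assumptions, not values from this article; a higher-quality route would swap in a motion-adaptive filter such as `bwdif`, or an external motion-compensated tool.

```python
# Hypothetical command builder: deinterlace-then-encode-progressive.
# "yadif" and "libx264" are real ffmpeg components; everything else
# (function name, paths, bitrate) is an illustrative assumption.

def encode_cmd(src, dst, deinterlace=True):
    cmd = ["ffmpeg", "-i", src]
    if deinterlace:
        # yadif mode 0: one progressive output frame per interlaced frame
        cmd += ["-vf", "yadif=0"]
    cmd += ["-c:v", "libx264", "-b:v", "5M", dst]
    return cmd

# Example: build the command for an interlaced SD master.
print(" ".join(encode_cmd("master_576i.ts", "out_progressive.mp4")))
```

The point of structuring it this way is that the deinterlace step stays a cheap, optional stage in front of the encoder, so you can drop it (`deinterlace=False`) when you genuinely want to keep the stream interlaced for CRT audiences.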
Keep in mind that when deinterlacing, as with ANY video processing, the image is being processed and will be forever changed from its source. If you can afford the storage, keep your masters untouched for future repurposing.