Character songs are a goldmine. They give songwriters a chance to write something they like, without the pressure of selling and without the compromises seiyuu albums impose (where singing > music, always).
GochiUsa has more than 50 CDs released ATM and counting, with a lot of good songs that hardly anyone except fans will ever hear.
Then there's Haganai, where in a bonus song Kanahana and Iguchi Yuuka call each other slurs for 5 minutes. That lyricist went on to write most SAO character songs.
Love Live is a disgusting, career-destroying, super-commercialized representation of everything wrong with the idol industry. "Let's offer a bunch of new voice actresses the chance to be idols, make them withdraw from voice acting, overwork them, and when they're too old and shaggy we'll just f̶i̶r̶e̶ graduate them back to a now nonexistent career."
Songs - waste of good songwriters forced to write crap.
Anime - cliche melodrama central.
They can't afford to do anything different and risk being hated.
Here's a fun question: what colorspace are bitmap subtitles meant to be in? Nope, it's not signalled.
For PGS you have to derive it from the resolution, with the naive belief that nobody will master them in anything other than BT.709 at 720p and higher.
Bitmap subs were a terrible idea for DVDs, and even though they had a chance to fix it, they kept it for BDs.
Instead of a <custom bitmap atlas> and transmitting <indices into the atlas>, just use OTF/TTF. Rasterization is cheap, will always look better than bitmaps, and you can still send <custom unicode>.
And then the subs can be extracted without OCR (seriously, OCR to recover subtitles!) and the "intellectual property stealing sub websites" won't be filled with OCR typos.
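The resolution guess mentioned above can be sketched like this — a hypothetical version of the heuristic a player might use, since PGS itself carries no colorspace field (function name and exact thresholds are my assumptions):

```python
def guess_subtitle_colorspace(width, height):
    """Hypothetical heuristic: PGS signals no colorspace, so players
    have to guess from the video resolution it is muxed alongside."""
    # Assumption: SD content was mastered as BT.601, HD as BT.709.
    if height >= 720 or width >= 1280:
        return "BT.709"
    return "BT.601"
```

Anything mastered against a different matrix than the guess comes out with subtly wrong subtitle colors, and nothing in the stream can tell you.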
I'm pretty sure now that a good quality white chocolate tastes the best to me.
Normal milk chocolate can be great, but it's hard to find a good ratio. With orange flavor it can taste incredible.
Dark is an acquired taste like coffee, and some sort of loudness war is going on where only 90%+ dark is considered good, pure, real chocolate by some.
Haven't managed to properly try ruby chocolate yet, aside from in KitKats, where it was okay.
Fun fact, some countries don't consider white chocolate actual chocolate. Bureaucrats.
Opus encodes 48 kHz signals, period. It may internally resample because of layers and other things (low bitrate/VoIP only), but it has to output at 48 kHz.
Ogg's header has a sample rate field. What does opusenc put in that field? The original sample rate of the file.
What does opusdec do with that field? Downsample/upsample to that rate. For no reason. Not in the spec. Nothing else does this, because containers lie and you can't trust them, especially Ogg.
This leads to some confused users. Grrr.
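For reference, the field in question sits in the OpusHead packet (layout per RFC 7845, where it is explicitly informational only). A stdlib-only sketch of pulling it out:

```python
import struct

def opushead_input_rate(opushead):
    """Parse the 'input sample rate' field of an Ogg Opus OpusHead packet.
    Per RFC 7845 this field is purely informational: Opus itself always
    decodes to 48 kHz no matter what is written here."""
    assert opushead[:8] == b"OpusHead"
    # Layout: magic(8) version(1) channels(1) pre-skip(2 LE)
    #         input sample rate(4 LE) output gain(2) mapping family(1)
    (rate,) = struct.unpack_from("<I", opushead, 12)
    return rate
```

So a file encoded from 44.1 kHz input will say 44100 here even though every decoder hands you 48 kHz samples — except opusdec, which resamples to match the field.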
Found some more unintentional #glitchart.
I think this one had to do with a botched shift from 8-bit to signed 16-bit. And some sort of chroma desync too.
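A sketch of what I'm guessing the bug looked like — this is a hypothetical reconstruction, not the actual code that produced the image:

```python
def botched_u8_to_s16(v8):
    """Hypothetical bug: widen an unsigned 8-bit sample by shifting,
    then store the result in a *signed* 16-bit slot. Values >= 128
    wrap negative, so bright pixels flip to dark: instant glitch art."""
    v16 = v8 << 8
    if v16 >= 32768:          # signed 16-bit wraparound
        v16 -= 65536
    return v16
```

The correct widening would be `(v8 << 8) | v8` kept unsigned; the sign confusion alone is enough to shred an image.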
Left: using embedded ICC profile
Right: not using the embedded ICC profile and assuming it's sRGB.
Yes, they can appear in JPEG (rarely), PNG (often), TIFF (very often), WebP (never, don't use WebP), JPEG 2000 (aliens), MOV/MP4 (never). They can do anything to the colors.
If you want proper, reproducible colors, use a player that supports both an embedded ICC profile and a display-specific profile, like mpv. I don't think any other viewer/player does. Editors like GIMP do, though.
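For PNG at least, checking whether a file actually carries an embedded profile is easy — a stdlib-only sketch that walks the chunk list for `iCCP` (chunk layout per the PNG spec; error handling omitted):

```python
import struct, zlib

def png_chunks(data):
    """Yield (type, payload) for each chunk in a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length  # length + type + payload + CRC

def embedded_icc(data):
    """Return the decompressed ICC profile from an iCCP chunk, or None."""
    for ctype, payload in png_chunks(data):
        if ctype == b"iCCP":
            name, _, rest = payload.partition(b"\x00")
            # rest[0] is the compression method (0 = zlib deflate)
            return zlib.decompress(rest[1:])
    return None
```

If this returns profile bytes and your viewer ignores them, you're in the "right" image above.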
The Revenant from Doom 3 has, IMO, the best monster design of any game or other media I've seen.
You can create as many variations of a medieval dragon as you'd like but in the end it'll still be a boring old dragon. Or you can copy trolls and make them as big as you'd like but they'll still be a cyberdemon.
The Revenant is a rocket-launcher-backpack-wielding skeleton with semi-translucent skin that makes terrifying growls and has a very damaging claw attack. That's originality you don't see often.
"Why didn't JPEG2000 take off?", but answered with rhetorical technical questions:
Why did they use the world's slowest entropy coding system and then make it even slower?
Why use wavelets when they're far worse than a DCT for frequency decomposition, and because of this create bad artifacts and blur when heavily quantized?
Why did they use wavelets when they knew this prevents easy pixel-domain prediction, something video codecs and PNG proved works great?
Why is the bitstream batshit crazy?
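To show what "easy pixel-domain prediction" means here: PNG's 'Sub' filter just predicts each byte from its left neighbour and stores the residual — trivially cheap and perfectly invertible, and exactly the kind of thing a wavelet transform gets in the way of. A minimal sketch:

```python
def png_sub_filter(row):
    """PNG's 'Sub' filter: predict each byte from its left neighbour
    and keep only the residual (mod 256). Cheap pixel-domain prediction."""
    return bytes((row[i] - (row[i - 1] if i else 0)) & 0xFF
                 for i in range(len(row)))

def png_sub_unfilter(res):
    """Inverse: a running sum reconstructs the original row exactly."""
    out = []
    for i, r in enumerate(res):
        out.append((r + (out[i - 1] if i else 0)) & 0xFF)
    return bytes(out)
```

On smooth gradients the residuals cluster near zero, which is what makes the entropy coder's job easy.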
Codec researcher, x86 assembly and Vulkan expert. As expected.
Had nothing to do with x264. Most unexpected.
A Mastodon instance for people interested in multimedia, codecs, assembly, SIMD, and the occasional weeb stuff.