names_r_hard 6 days ago

Thanks to all who are sharing their appreciation for this niche but cool project.

I'm the current lead dev, so please ask questions.

Got a Canon DSLR or mirrorless and like a bit of software reverse engineering? Consider joining in; it's quite an approachable hardware target. No code obfuscation, just classic reversing. You can pick up a well supported cam for a little less than $100. Cams range from ARMv5te up to AArch64.

GranPC 6 days ago | parent | next [-]

What's the situation re: running on actual hardware these days? I was experimenting around with my 4000D but when it came to trying to actually run my code on the camera rather than the emulator, a1ex told me I needed some sort of key or similar. He told me he'd sign it for me or something but he got busy and I never heard back.

Is this situation still the same? (Apologies for the hazy details -- this was 5 years ago!)

names_r_hard 6 days ago | parent [-]

That must have been a few years back. I think you're talking about enabling "camera bootflag". We provide an automated way to do this for new installs on release builds, but don't like to make this too easy before we have stable builds ready. People do the weirdest stuff, including trying to flash firmware that's not for their cam, in order to run an ML build for that different cam...

Anyway, I can happily talk you through how to do it. Our discord is probably easiest, or you can ask on the forum. Discord is linked from the forum: https://www.magiclantern.fm/forum/

Whatever code you had back then won't build without some updates. 4000D is a good target for ML, lots of features that could be added.

GranPC 6 days ago | parent [-]

Yes, this was in September 2020 according to my records. All I remember is that I could run the ROM dumper just fine, then I could run my firmware in QEMU, and then I just had to locate a bunch of function pointers to make it do anything useful. Worked in QEMU but that's where I got stuck - no way to run it on hardware.

I'll definitely keep this in mind and hit you up whenever I have a buncha hours to spare. :)

names_r_hard 6 days ago | parent [-]

That would have been only a little before a1ex left. Getting code running on real hardware is easy, maybe I'll talk to you in discord in a few months when you find this fabled free time we are all looking for ;)

The 4000D is an interesting cam, we've had a few people start ports then give up. It has a mix of old and new parts in the software. Canon used an old CPU / ASIC: https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...

So it has hardware from 2008, but they did update the OS to a recent build. This is not what the ML code expects to find, so it's been a confusing test of our assumptions. Normally the OS stays in sync with the hardware changes, which means when we're reversing, it's hard to tell which changes are which.

That said, 4000D is probably a relatively easy port.

grep_name 6 days ago | parent | prev | next [-]

Wow, newly supported models are super exciting to see! I have a 5d mk iii which I got specifically to play around with ML. I haven't done much videography in my life, but I do plan to get some b-roll at the very least with my mk iii, or maybe record some friends' live events sometime.

> I'm the current lead dev, so please ask questions.

Well, you asked for it!

One question I've always wondered about the project is: what is the difference between a model that you can support, and a model you currently can't? Is there a hard line where ML future compatibility becomes a brick wall? Are there models where something about the hardware / firmware makes you go 'ooh, that's a good candidate! I bet we can get that one working next'?

Also, as someone from the outside looking in who would be down to spend $100 to see if this is something I can do or am interested in: which (cheap) model would be the easiest to grab and load up as a dev environment (or in a configuration that mimics what someone might do to work on a feature), and where can I find documentation on how to do that? Is there a compendium of knowledge about how these cameras work from a reverse-engineering angle, or does everyone cut their teeth on forum posts and official Canon technical docs?

edit: Found the RE guide on the website, gonna take a look at this later tonight

names_r_hard 6 days ago | parent [-]

5D3 is perhaps the best currently supported ML cam for video. It's very capable - good choice. Using both CF and SD cards simultaneously, it can record at about 145MB/s, so you can get very high quality footage.

Re what we can support - it's a reverse engineering project, so we can support anything given enough time ;) The very newest cams have software changes that make enabling ML slightly harder for normal users, but they don't make much difference from a developer perspective. I don't see any signs of Canon trying to lock out reverse engineers. Gaining access and doing a basic port (ML GUI but no features) is not hard once you have experience.

What we choose to support: I work on the cams that I have. And the cams that I have are whatever I find for cheap, so it's pretty random. Other devs have whatever priorities they have :)

The first cam I ported to was the 200D, unsupported at the time. It took me a few months to get the ML GUI working (with no features enabled), and I had significant help. Now I can get a new cam to that standard in a few days in most cases. All the cams are fairly similar at the core OS level. It's the peripherals that change the most as hardware improves, so they take the most time. And the newer the camera, the more the hw and sw have diverged from the best supported cams.

The cheapest way for you to get started is to use your 5D3 - which you can do in our fork of qemu. You can dump the roms (using software, no disassembly required), then emulate a full Canon and ML GUI, which can run your custom ML changes. There are limitations, mostly around emulation of peripherals. It's still very useful if you want to improve / customise the UI.

https://github.com/reticulatedpines/qemu-eos/tree/qemu-eos-v...

Re docs - they're not in great shape. They're scattered over a few different wikis, a forum, and commit messages in multiple repos. Quick discussion happens on Discord. We're very responsive there; it's the best place for dev questions. The forum is the best single source for reference knowledge. From a developer perspective, I have made some efforts on a Dev Guide, but it's far from complete, e.g.:

https://github.com/reticulatedpines/magiclantern_simplified/...

If you want physical hardware to play with (it is more fun after all), you might be able to find a 650d or 700d for about $100. Anything that's Digic 5 green here is a capable target:

https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...

Digic 4 stuff is also easy to support, and will be cheaper, but it's less capable and will be showing its age generally - depends if that bothers you.

Vagantem 6 days ago | parent | prev | next [-]

Just wanted to say thanks for keeping this alive! I used magic lantern in 2014 to unlock 4K video recording on my Canon. It was how students back then could start recording professional video without super expensive gear

dylan604 6 days ago | parent | prev | next [-]

I still shoot a 5Dmkii solely due to the ML firmware. It's primarily a timelapse camera at this point. The ETTR functionality is one of my absolute favorites. The biggest drawback I have is trying to shoot with an interval less than 5 seconds. The ML software gets confused and shoots irregular interval shots. Anything over 5 seconds, and it's great. No external timers necessary for the majority of my shooting. I do still have an external for when <5s intervals are necessary. I'm just waiting for the shutter to die, but I'm confident I'll just have it replaced and continue using the body+ML rather than buy yet another body.

Thanks for your work keeping it going, and for those that have worked on it before.

names_r_hard 6 days ago | parent [-]

Strange, it certainly can do sub 5s on some bodies. But I don't have a 5d2 to test with.

Could this be a conflict with long exposures? Conceivably AF, too. The intervalometer will attempt to trigger capture every 5s wall time. If the combined time to AF seek, expose, and finish saving to card (etc) is >5s, you will skip a shot.
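That wall-time behaviour can be shown with a toy simulation (hypothetical code, not from the ML source): shots are scheduled on fixed ticks, and any tick that arrives while the camera is still busy gets skipped, so the observed interval jumps to a multiple of the configured one.

```python
# Toy model of a wall-clock intervalometer: ticks fire every `interval`
# seconds; a tick is skipped if the previous shot (AF + exposure + card
# write) hasn't finished yet.

def simulate(interval, busy_times):
    """Return the wall-clock times at which shots actually fire."""
    fired = []
    next_tick = 0.0
    free_at = 0.0  # time the camera finishes its current shot
    for busy in busy_times:
        # advance past any ticks that land while the camera is busy
        while next_tick < free_at:
            next_tick += interval  # this tick is skipped
        fired.append(next_tick)
        free_at = next_tick + busy
        next_tick += interval
    return fired

# Three shots each taking 4s on a 3s interval: every other tick is lost,
# so shots land at 0, 6, 12 instead of 0, 3, 6.
print(simulate(3.0, [4.0, 4.0, 4.0]))  # [0.0, 6.0, 12.0]
```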

When the time comes, compare the price of a used 5d3 vs a shutter replacement on the 5d2, maybe you'll get a "free" upgrade :) Thanks for the kind words!

dylan604 6 days ago | parent [-]

> Could this be a conflict with long exposures?

I've done lots of 1/2 second exposures with a 3s interval, and it shoots some at a much shorter interval than 3 and some at 3+??? At one point, the docs said 5s was a barrier. Maybe it was the 5dmkii specifically. All of my cards are rated higher than the 5D can write (which makes DIT much faster), so I doubt write speed is interfering. What makes me think it's not the camera is that a cheap external timer works without skipping a beat.

names_r_hard 6 days ago | parent [-]

Yeah, the external timer behaviour is fairly strong evidence. Curious though. These cams all seem to have a milli- and micro-second hw clock, and can both schedule and sleep against either. But it's also true that every cam has some weird quirks. And I don't know the 5d2 internals well.

From what I've seen, the image capture process is state machine based and tries to avoid sleeps and delays. Which makes sense for RTOS and professional photography.

If you care enough to debug it, pop into the discord and I can make you some tests to run.

pixelmonkey 6 days ago | parent | prev | next [-]

I just want to say "thank you." I run Magic Lantern on my Canon 5D Mark III (5d3) and it is such awesome software.

I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.

More here:

https://amontalenti.com/photos

When I hang out with programmer friends and demo Magic Lantern to them, they are always blown away.

names_r_hard 6 days ago | parent [-]

You're a better photographer than I am. I'm glad if ML helped you.

Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember for the R7, but from the age and tier, it may be one of the new-gen quad-core AArch64 models.

I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.

pixelmonkey 6 days ago | parent [-]

I've always wanted to work on Magic Lantern myself (I am in the Discord) but just haven't found the time yet! Thanks again!

ASlave2Gravity 6 days ago | parent | prev | next [-]

Hey just want to say a massive thank you for everything you've done with this project. I've shot so much (short films, music videos, even a TV pilot!) on my pair of 600Ds and ML has given these cams such an extended life.

It’s been a huge blessing!

fooker 6 days ago | parent | prev | next [-]

I recently obtained an astro converted 6D. Have played around with CHDK a long time ago as a teenager but never magic lantern.

I am a compiler dev with decent low level skills, anything in particular I should look at that would be good for the project as well as my ‘new’ 6D? (No experience with video unfortunately)

I have a newer R62 as well, but would rather not try anything with it yet.

names_r_hard 6 days ago | parent [-]

Ah I'd love an astro conversion.

I've had a fun idea knocking around for a while for astro. These cams have a fairly accessible serial port, hidden under the thumb grip rubber. I think the 6D may have one in the battery grip pins, too. We can sample LV data at any time, and do some tricks to boost exposure for "night vision". Soooo, you could turn the cam itself into a star tracker, which controlled a mount over serial. While doing the photo sequence. I bet you could do some very cool tricks with that. Bit involved for a first time project though :D

The 6D is a fairly well understood and supported cam, and your compiler background should really help you - so really the question is what would you like to add? I can then give a decent guess about how hard various things might be. I believe the 6D has integrated Wifi. We understand the network stack (surprisingly standard!) and a few demo things have been written, but nothing very useful so far. Maybe an auto image upload service? Would be cool to support something like OAuth, integrate with imgur etc?

It's slow work, but hopefully you don't mind that too much, compilers have a similar reputation.

fooker 6 days ago | parent [-]

> turn the cam itself into a star tracker

Hmm, that's a neat idea. The better term for it is 'autoguider'. Autoguiding is basically supplying correction information to the mount when it drifts off.

Most mounts support guiding input and virtually all astrophotographers set up a separate tiny camera, a small scope, and a laptop to auto guide the mount. It would be neat for the main camera to do it. The caveat is that this live view sampling would add extra noise to the main images (more heat, etc). But in my opinion, the huge boost in convenience would make that worth it, given that modern post processing is pretty good for mitigating noise.
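For illustration, the correction arithmetic really is simple — this is a rough sketch of ST-4 style pulse guiding, where measured drift is converted into a direction line plus a pulse duration. The guide rate, aggressiveness factor, and constants here are typical illustrative values, not anything mount-specific:

```python
# Hypothetical pulse-guiding sketch: convert a measured drift (arcsec)
# into a correction pulse. Mounts guide at some fraction of the sidereal
# rate (~15.04 arcsec/s); the pulse length is the time needed to undo
# (most of) the drift at that rate.

SIDEREAL_ARCSEC_PER_S = 15.041

def pulse_ms(drift_arcsec, guide_rate=0.5, aggressiveness=0.8):
    """Duration (ms) to hold the guide line to correct a measured drift."""
    rate = guide_rate * SIDEREAL_ARCSEC_PER_S  # arcsec corrected per second
    return abs(drift_arcsec) * aggressiveness / rate * 1000.0

def direction(axis, drift_arcsec):
    """Which of the four guide lines to pulse (RA+/-, DEC+/-)."""
    sign = '-' if drift_arcsec > 0 else '+'
    return axis + sign

# 1.5" of RA drift at 0.5x guide rate -> roughly a 160 ms pulse on RA-
print(direction('RA', 1.5), round(pulse_ms(1.5)))
```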

The signals that have to be sent to the mount are pretty simple too, so I'll look at this at some point in the future. The bottleneck for me is that I have never got 'real' auto guiding to work reliably with my mount, so if I run into issues it would be tricky, as there's no baseline working version.

> Maybe an auto image upload service?

This sounds pretty useful; even uploading seamlessly to a phone or laptop would be a huge time saver for most people! I'll set up ML on my 6D and try out some of the demo stuff that uses the network stack.

Is there a sorted list of things that people want and no one has got around to implementing yet?

names_r_hard 6 days ago | parent [-]

I am definitely an astro noob :) LV sampling was just the first idea I thought of. We could also load the last image while the next was being taken, and extract guide points from that (assuming an individual frame has enough distinct bright points... which it might not... you could of course sum a few in software). It's a larger image, but your time constraints shouldn't be tight. That way you're not getting any extra sensor heat. Some CPU heat though, dunno if that would be noticeable.
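As a toy illustration of the "extract guide points" step (hypothetical code, nothing ML-specific): the classic approach is an intensity-weighted centroid over the thresholded pixels around a bright star, with the frame-to-frame centroid shift giving the drift estimate.

```python
# Hypothetical guide-point extraction: intensity-weighted centroid of
# above-threshold pixels in a small grayscale frame. Comparing centroids
# across frames yields the drift vector fed to the guider.

def centroid(frame, threshold=0):
    """Intensity-weighted centroid (x, y) of pixels above threshold."""
    total = sx = sy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                total += v
                sx += x * v
                sy += y * v
    if total == 0:
        return None  # no star bright enough in this frame
    return (sx / total, sy / total)

# A single "star" blurred over a 2x2 patch:
frame = [
    [0, 0,  0,  0],
    [0, 10, 20, 0],
    [0, 20, 40, 0],
    [0, 0,  0,  0],
]
print(centroid(frame, threshold=5))  # pulled toward the bright corner
```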

For networking, this module demonstrates the principles: https://github.com/reticulatedpines/magiclantern_simplified/...

A simple python server, that accepts image data from the cam, does some processing, sends data back. The network protocol is dirt simple. The config file format for holding network creds, IP addr etc is really very ugly. It was written for convenience of writing the code, not convenience of making the config file.
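As a sketch of what that kind of dirt-simple framing can look like (illustrative only — this is not the actual module's protocol, and all names here are made up): a 4-byte big-endian length prefix followed by the payload, in both directions.

```python
# Hypothetical length-prefixed TCP exchange: the "camera" sends
# 4-byte length + image bytes; the server "processes" the payload
# (here: just reports its size) and replies in the same framing.
import socket
import struct
import threading

def recv_exact(conn, n):
    """Read exactly n bytes from a socket."""
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def serve_once(host='127.0.0.1', port=0):
    """Accept one connection in a background thread; return the bound port."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        (length,) = struct.unpack('>I', recv_exact(conn, 4))
        payload = recv_exact(conn, length)
        reply = str(len(payload)).encode()  # stand-in "processing"
        conn.sendall(struct.pack('>I', len(reply)) + reply)
        conn.close()
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

# Client side: send a fake "image", read back the framed reply.
port = serve_once()
cli = socket.socket()
cli.connect(('127.0.0.1', port))
data = b'\x00' * 1024
cli.sendall(struct.pack('>I', len(data)) + data)
(rlen,) = struct.unpack('>I', recv_exact(cli, 4))
print(recv_exact(cli, rlen).decode())  # server reports 1024 bytes received
cli.close()
```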

You would need to find the equivalent networking functions (our jargon is "stubs"). You will likely want help with this, unless you're already familiar with Ghidra or IDA Pro, and have both a 6D and 200D rom dump :) Pop in the discord when you get to that stage, it's too much detail for here.

There's no real list of things people want (well, they want everything...). The issues on the repo will have some good ideas. In the early days of setting that up I tagged a few things as Good First Issue, but gave up since it was just me working on them.

I would say it's more important to find something you're personally motivated by, that way you're more likely to stick with it. It gets a lot easier, but it doesn't have a friendly learning curve.

fooker 6 days ago | parent [-]

Does LV sampling work when ..say.. a 120 second image is being captured?

names_r_hard 6 days ago | parent | next [-]

I don't know of a way to do that. I don't think the cam will ever display an image on LV while a capture is in progress. The readout process from the sensor is fundamentally decoupled from the capture. You could probably interleave long exposures with short ones at greatly boosted ISO, and display only the short ones on LV.

I was assuming it would be possible to quite accurately model the drift over time, and adjust the model based on the last image. The model continuously guides the mount, and the lag in updates hopefully wouldn't matter - so you can use saved images, not LV. In fact, we can trigger actions to occur on the in memory image just before writing out.
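As a toy version of that drift model (illustrative only): fit a line through the centroid offsets measured from the last few saved frames, then predict the offset between frames, so guiding stays continuous even though updates lag by a full exposure.

```python
# Hypothetical linear drift model: least-squares line through
# (time, offset) samples; prediction fills the gap between frames.

def fit_drift(samples):
    """Least-squares fit; returns (rate, start) for offset = start + rate*t."""
    n = len(samples)
    st = sum(t for t, _ in samples)
    so = sum(o for _, o in samples)
    stt = sum(t * t for t, _ in samples)
    sto = sum(t * o for t, o in samples)
    rate = (n * sto - st * so) / (n * stt - st * st)
    return rate, (so - rate * st) / n

def predict(samples, t):
    """Predicted offset at time t, from the fitted drift line."""
    rate, start = fit_drift(samples)
    return start + rate * t

# Offsets (arbitrary units) measured from frames saved at t = 0, 30, 60 s;
# the model interpolates between them for continuous guiding.
samples = [(0, 0.0), (30, 1.5), (60, 3.0)]
print(predict(samples, 45))  # linear drift -> 2.25
```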

fooker 5 days ago | parent [-]

> quite accurately model the drift over time

This indeed seems like something someone would have written software for!

CarVac 6 days ago | parent | prev | next [-]

I would love to add it to my 1Ds3. I recall reading that once upon a time Canon wrote ML devs a strongly worded letter telling them not to touch a 1D, but a camera that old is long obsolete.

(I literally only want a raw histogram)

(I also have a 1Dx2 but that's probably a harder port)

dylan604 6 days ago | parent | next [-]

I have been toying with the idea of picking up an old 1D. I can't remember the guy's name that I saw do this, but he had his 1D modified to use a PL mount instead of an EF mount. Something about the 1D body (being thicker I guess) allowed for the flange distances to work out. He then mounted a $35,000 17mm wide angle to it. That lens was huge and could just suck in photons. With that lens, he could expose the night sky in 1/3 second exposures what would take multiple seconds on my gear. He mounted the camera to the front of his boat floating down river using night vision goggles to see where he was going. The images were fantastic. I always wanted to do something crazy like that

names_r_hard 6 days ago | parent | prev | next [-]

Canon have never had any contact with the ML project, for any reason, to the best of my knowledge. The decision to stay away from the 1D series was made by the ML team, I would say out of an abundance of caution, to try not to annoy them.

omegacharlie 6 days ago | parent [-]

Might be time to reconsider. Canon are (supposedly) not planning any further flagship DSLRs and I see little wrong with modifying your own property.

Independent of that: how dangerous is ML dev to the cameras themselves (in terms of brick potential)? Permanently bricking a camera in the price range of the 1DX is not exactly my idea of a good time. :-)

names_r_hard 6 days ago | parent [-]

Over the years, a few devs have temp soft bricked cams, requiring various non-standard methods to restore them to working order. I think all attempts succeeded so far! I don't think any permanent physical damage has been triggered by devs. It is definitely a real risk, but we try to work with the OS when possible, and the OS was written to try to make these things hard.

I don't think I'd want to learn the ropes on a cam too expensive to psychologically say goodbye to. Maybe save that for the second port.

omegacharlie 5 days ago | parent [-]

Thank you for the insight. Best of luck to you and the future of the project!

dingaling 6 days ago | parent | prev [-]

The 1Ds3 still renders wonderful images but the UI feels so limited now. ML would transform it.

archerx 6 days ago | parent | prev [-]

I use magic lantern on my canon 650D to get a clean feed for my blackmagic ATEM. The installation was easy and everything works well.

Thank you and the magic lantern team!