16 Comments
Apr 7 · Liked by Cameron R. Wolfe, Ph.D.

Thanks for this fantastically detailed write-up!

Since I come from a computer vision background, I have seen MoEs used for a modality other than text. I have seen them used to "conditionally fuse" information based on the "quality and content" of various inputs.

Imagine a CNN that performs semantic segmentation of a scene from multi-modal inputs such as RGB images, infrared images, etc. The model learns to "weigh" the output of each modality branch, and the weighting is conditioned on the inputs. So if the RGB image is washed out due to high exposure because your RGB camera is facing the sun, the model can give the RGB branch a lower weight and prefer information from the other branches when producing the segmentation mask.
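
A minimal PyTorch-style sketch of this idea, assuming each modality branch already produces feature maps of the same shape; the module, names, and shapes here are illustrative rather than the commenter's actual architecture:

```python
# Sketch: input-conditioned fusion of per-modality feature maps,
# analogous to a soft MoE router over modality branches.
import torch
import torch.nn as nn

class GatedModalityFusion(nn.Module):
    def __init__(self, num_modalities: int, channels: int, num_classes: int):
        super().__init__()
        # Small gating network: looks at all branch features and produces
        # one weight per modality (softmax so the weights sum to 1).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_modalities * channels, num_modalities),
            nn.Softmax(dim=-1),
        )
        # 1x1 conv producing the segmentation logits from the fused features.
        self.head = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, branch_feats: list[torch.Tensor]) -> torch.Tensor:
        # branch_feats: list of [B, C, H, W] feature maps, one per modality
        # (e.g., RGB branch, infrared branch).
        stacked = torch.stack(branch_feats, dim=1)        # [B, M, C, H, W]
        gate_in = torch.cat(branch_feats, dim=1)          # [B, M*C, H, W]
        weights = self.gate(gate_in)                      # [B, M]
        weights = weights.view(*weights.shape, 1, 1, 1)   # [B, M, 1, 1, 1]
        fused = (weights * stacked).sum(dim=1)            # [B, C, H, W]
        return self.head(fused)                           # [B, num_classes, H, W]
```

The gate plays the same role as an MoE router: a washed-out RGB input should yield a small weight for the RGB branch, so the fused features lean on the other modalities.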

Mar 28 · Liked by Cameron R. Wolfe, Ph.D.

Cameron, this was a really excellent overview; it shows your impressive command of the material. Would love to see a book by you on the topic.

Mar 21 · Liked by Cameron R. Wolfe, Ph.D.

Absolutely fantastic article, thank you!

Mar 20 · Liked by Cameron R. Wolfe, Ph.D.

Thank you for sharing this.

Mar 19 · Liked by Cameron R. Wolfe, Ph.D.

Great!

Mar 18 · Liked by Cameron R. Wolfe, Ph.D.

This was great for me, thanks Cameron, you went ALL out! I invested (a year ago) in an MoE network called BitTensor and thought I understood this. I did not, but I do now. I'm not exactly sure whether they are still MoE-based. Are you familiar with this network, and if so, any thoughts on the mechanism underlying it? There are a number of very highly qualified AI groups building on it. I would like to as well but haven't learned enough yet.


Yes, very much so. If you find it compelling enough and are available, I'm looking to hire an AI consultant to help build my position on the network (fine-tuning, hosting models).
