Thanks for this fantastically detailed write-up!
Since I'm from a computer vision background, I've seen MoEs used for a different modality than text. I've seen them used to "conditionally fuse" information based on the "quality and content" of various inputs.
Imagine a CNN that does semantic segmentation of a scene from multi-modal inputs such as RGB images, infrared images, etc. The model learns to "weigh" the output of each modality branch, and the weighting is conditioned on the inputs. So if the RGB image is washed out due to high exposure because your RGB camera is facing the sun, the model can give the RGB branch a lower weight and prefer information from the other branches when producing the segmentation mask. A rough sketch of this kind of gated fusion is below.
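A minimal sketch of the idea, assuming a PyTorch-style setup (the module and all names here are hypothetical, not taken from any specific paper):

```python
import torch
import torch.nn as nn


class GatedModalityFusion(nn.Module):
    """Fuses per-modality feature maps with input-conditioned weights (hypothetical sketch)."""

    def __init__(self, num_modalities: int, channels: int):
        super().__init__()
        # Small gating network: pools all branch features and outputs one
        # weight per modality, conditioned on the inputs themselves.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_modalities * channels, num_modalities),
            nn.Softmax(dim=-1),
        )

    def forward(self, branch_features: list[torch.Tensor]) -> torch.Tensor:
        # branch_features: one (B, C, H, W) feature map per modality branch
        stacked = torch.stack(branch_features, dim=1)       # (B, M, C, H, W)
        b, m, c, h, w = stacked.shape
        weights = self.gate(stacked.view(b, m * c, h, w))   # (B, M)
        # A washed-out RGB branch should get a low weight here, so the fused
        # features lean on the other modalities instead.
        return (weights.view(b, m, 1, 1, 1) * stacked).sum(dim=1)  # (B, C, H, W)
```

The downstream segmentation head would then operate on the fused feature map rather than on any single modality.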
Yep, totally! In general, there isn't a clear analysis for language models showing that certain experts specialize in certain skills, but my gut feeling is that you could use an analysis similar to the paper below to find some type of specialization.
https://transformer-circuits.pub/2023/monosemantic-features
That is interesting! Thanks for sharing the relevant paper. :)
Cameron, this was a really excellent overview; it shows your impressive command of the material. Would love to see a book by you on the topic.
Thanks for the kind words. I might try to write a book in the future once I build up enough content on the newsletter to serve as a starting point :)
Absolutely fantastic article, thank you!
Glad you liked it! Thanks for reading
Thank you for sharing this
Of course! Thank you for reading 🙂
Great work
Thanks! Thank you for reading
Great!
Thanks!
This was great for me, thanks Cameron, you went ALL out! I've invested (a year ago) in an MoE network called BitTensor and thought I knew this. I did not, but do now. I'm not exactly sure if they're still MoE. Are you familiar with this network, and if so, any thoughts on the mechanism underlying it? There are a number of very highly qualified AI groups building on it. I would like to as well but haven't learnt enough yet.
I'm not familiar with BitTensor, but it looks interesting!
Yes, very much so, if you find it compelling enough and are available - I’m looking for an AI consultant to hire regarding building my position on the network (fine-tuning, host models).