Residue Number Systems for GPU computing as an indie researcher. Thoughts?

I've been thinking about a question: "Are there analogs to parallel computing rooted in number theory? Is there a way to emulate a GPU on a regular CPU, not through hardware, but by replacing GPU threads with concepts from prime numbers and finite field theory?" I know this sounds kooky.

However, I care about this question because, in a world where AI is becoming commonplace, being GPU-poor is somewhat akin to being locked out of the future. There must be some way to perform lots of matmuls (or at least an intelligence-evoking number of muls) on consumer CPUs. Maybe we just haven't invented the right number system. I believe math is mostly invented, rarely discovered: binary 1s and 0s were surely discovered, but floating-point and fixed-point are human inventions tweaked for specific use cases.

In fact, researchers built Residue Number Systems (RNS) in the 1960s as an alternative to binary (base-2) arithmetic for massively parallel computing. However, they fell out of favor because the modular arithmetic underlying RNS (rooted in finite field theory) maps only addition and multiplication cleanly onto residues; division and magnitude comparison have no cheap residue-level analog. Here's the thing: matrix multiplication is nothing but multiplies and adds. So RNS is great for general matmuls but fails at things like softmax and backprop.
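
To make that concrete, here's a minimal Python sketch of RNS arithmetic (my own illustration; the moduli and the helper names to_rns/from_rns are arbitrary choices, not anyone's library). Addition and multiplication act independently on each residue channel, which is where the parallelism comes from; getting a number back out goes through the Chinese Remainder Theorem.

    from math import prod

    MODULI = (251, 241, 239)   # pairwise-coprime primes; dynamic range M ~ 14.4M
    M = prod(MODULI)

    def to_rns(x):
        """Encode integer x as its residue in each channel."""
        return tuple(x % m for m in MODULI)

    def from_rns(r):
        """Decode via the CRT (needs Python 3.8+ for pow(.., -1, m))."""
        x = 0
        for ri, mi in zip(r, MODULI):
            Mi = M // mi
            x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
        return x % M

    def rns_add(a, b):
        # Each channel is independent -- no carries cross channels.
        return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

    def rns_mul(a, b):
        return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

    a, b = to_rns(1234), to_rns(5678)
    assert from_rns(rns_add(a, b)) == 1234 + 5678
    assert from_rns(rns_mul(a, b)) == 1234 * 5678   # so dot products / matmuls work

Notice what's missing: there is no channel-local way to compare two encoded values or to divide one by another, which is exactly the gap I'm describing.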

I argue that solving the division and comparison problem for RNS is the key to solving GPU poverty. RNS's foundational research was done in the 1960s. What if math invented in the 21st century is needed to solve this? For context, only after the invention of new mathematics in the 20th century did Wiles prove Fermat's Last Theorem.

I know "Talk is Cheap" and that I need to "Be the change i want to see in the world".

So here's everything I attempted with the RNS problem, and how it failed:
*Please be lenient with criticism. I'm not VC-funded, so I do this during my nights and weekends after coming home from my day job.

  1. Finite Field GEMM (Attempting to rewrite backpropagation within a residue number system)

It works in principle, but in practice it involves big-integer multiplications and additions, so during the backward pass the deltas become rather large, making it impossible to converge toward a local minimum (a toy sketch of the blow-up follows after the link).
Link: [hidden; log in or register to view]
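
To show what I mean by the deltas blowing up, here's the failure mode in miniature (a toy illustration in plain Python, not the actual repo code; the explicit scale bookkeeping stands in for the big residue-encoded integers). With only + and * available, every update has to raise the common scale instead of dividing it away, so the integers grow without bound even while the real-valued weight converges:

    # Toy illustration: division-free "training" of one weight.
    # The real value of the weight is w / s; we may never divide, so s only grows.
    x, t = 2, 5                 # input and target
    w, s = 3 * 256, 256         # weight 3.0 at initial scale 256
    lr_num, lr_den = 1, 16      # learning rate 1/16, kept as a fraction

    for step in range(6):
        y = w * x                        # prediction, still at scale s
        err = y - t * s                  # align scales by multiplying, never dividing
        grad = err * x                   # ~ d(err^2)/dw, at scale s
        w = w * lr_den - grad * lr_num   # w/s - (1/16)(grad/s), division-free...
        s = s * lr_den                   # ...because the common scale absorbs it
        print(f"step {step}: w/s = {w / s:+.4f}, |w| = {w.bit_length()} bits")

The printed w/s marches toward 2.5 as it should, but w gains bits on every single step. In the real GEMM the operands are big integers to begin with, so after a few backward passes the deltas outgrow any fixed RNS range.
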
  2. Mediant 32 (A fractional number system to get rid of explicit division in RNS) - We can't divide efficiently in an RNS, so why not carry an explicit numerator and denominator? It works extremely well; honestly, I was impressed. The only difficulty is handling overflow: the fractions tend to become quite large, and the absence of division makes simplification difficult (a stripped-down sketch follows after the link). Link: [hidden; log in or register to view]
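
For flavor, here's the idea stripped to its core (an illustration only, not the Mediant 32 code): keep every value as a (numerator, denominator) pair so that + and * never require division. The catch shows up immediately: reduction by the GCD is itself a division, so denominators multiply on every single addition.

    # Division-free fractions: add and multiply fine, but never reduce.
    class Frac:
        def __init__(self, num, den):
            self.num, self.den = num, den     # no gcd reduction -- gcd needs division

        def __add__(self, other):             # a/b + c/d = (a*d + c*b) / (b*d)
            return Frac(self.num * other.den + other.num * self.den,
                        self.den * other.den)

        def __mul__(self, other):             # (a/b) * (c/d) = (a*c) / (b*d)
            return Frac(self.num * other.num, self.den * other.den)

    acc, x = Frac(0, 1), Frac(1, 3)
    for _ in range(10):
        acc = acc + x * x                     # accumulate 1/9, ten times
    print(acc.den)                            # 9**10 = 3486784401: past int32 already

The value is just 10/9, but the representation has no way to know that; that is the overflow pressure I ran into.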

  3. Discrete Logarithm Neural Networks (A network architecture built around big integers, because my biggest challenge was dealing with big ints) - It actually works; I was surprised that it got 74% accuracy on the Iris dataset. The problem is that training the network involves solving a discrete logarithm problem, so my network weights can only be found by brute force. I don't think I'll find a way to backpropagate; if I did, I would achieve the impossible: breaking modern-day cryptography (the obstacle is sketched below). Link: [hidden; log in or register to view]
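
Here's the obstacle in miniature (illustrative numbers only, not my actual architecture): once a weight lives in the exponent, recovering it from an activation is literally the discrete logarithm problem.

    # p = 65537 is a Fermat prime and 3 is a primitive root mod p,
    # so pow(3, w, p) hits every nonzero residue exactly once.
    p, g = 65537, 3

    w_true = 54321                    # the weight we'd like training to recover
    activation = pow(g, w_true, p)    # forward pass: cheap

    # There is no gradient to follow: pow(g, w, p) jumps around pseudo-randomly
    # as w changes, so generic recovery is a brute-force walk of the exponents.
    w_found = next(w for w in range(p - 1) if pow(g, w, p) == activation)
    assert w_found == w_true

At p = 65537 the brute force is instant; at cryptographic sizes it is hopeless, and a gradient-like shortcut would amount to breaking discrete-log cryptography. That's why I don't expect to backpropagate through this.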


Next steps:
I'm looking at the Factorial Number System. Its modulo/digit structure is well defined, and it extends to fractions and the reals, so I wouldn't have to worry about the RNS division problem.
I bet I can define an RNS using factorials, and that it will be easy to work with. I started by going down the Enumerative Combinatorics rabbit hole here (a first converter sketch follows after the link).
Link: [hidden; log in or register to view]
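
Here's where I am so far: a minimal factoradic converter (my starting-point sketch, nothing more). Digit d_i has place value i! and satisfies 0 <= d_i <= i; the fractional side continues with place values 1/2!, 1/3!, ..., which is how the system reaches the reals.

    def to_factoradic(n):
        """Digits of n in the factorial number system, most significant first."""
        digits, base = [], 1
        while n:
            n, d = divmod(n, base)   # peel off the digit for place value (base-1)!
            digits.append(d)
            base += 1
        return digits[::-1] or [0]

    def from_factoradic(digits):
        n, fact, i = 0, 1, 1
        for d in reversed(digits):
            n += d * fact            # fact runs through 0!, 1!, 2!, ...
            fact *= i
            i += 1
        return n

    assert to_factoradic(463) == [3, 4, 1, 0, 1, 0]   # 463 = 3*5! + 4*4! + 1*3! + 1*1!
    assert from_factoradic([3, 4, 1, 0, 1, 0]) == 463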

Let me know what you guys think.

