seninha has quit [Remote host closed the connection]
seninha has joined #picolisp
stack1 has quit [Ping timeout: 240 seconds]
stack1 has joined #picolisp
avocadoist has joined #picolisp
aw- has quit [Quit: Leaving.]
aw- has joined #picolisp
seninha has quit [Ping timeout: 246 seconds]
razzy has joined #picolisp
<razzy>
Should I choose ARM or x86 for a computing-heavy picolisp server?
<razzy>
can picolisp utilize nvidia graphics card?
<beneroth>
use amd64 architecture (64bit)
<abu[7]>
arm or x86: I think it depends on the hardware
<abu[7]>
nvidia: How? Of course you can, with proper libraries
<beneroth>
picolisp itself doesn't run on the graphics card, but you can use picolisp to run code on the graphics card, using the usual libraries for that
<abu[7]>
Haha, same answers overlapped :)
<beneroth>
pil64 used to run well on arm, but now with llvm it's probably not such a big difference anymore?
<beneroth>
I'd guess "standard" amd64 architecture (doesn't matter if amd or intel or whatever) is probably best now
<abu[7]>
Yeah
<beneroth>
ARM if you want to optimize the electricity bill
<razzy>
I buy virtual machines from Amazon.
<razzy>
the electricity bill does not concern me.
<abu[7]>
Perhaps run some tests on both?
<beneroth>
running number crunching on nvidia cards (e.g. CUDA): I don't think anyone has published picolisp wrappers for CUDA or the like. You could make wrappers using the C++ interface of those libraries, or simply use python from picolisp (with a bit of overhead, but only during I/O I would think)
<beneroth>
but running code on GPU is always limited to some specific kinds of calculations, not general programming.
<beneroth>
I also recommend running tests.
<razzy>
beneroth: Amazon has really powerful CPUs; it will be some time before I need GPUs. I was thinking maybe LLVM uses GPUs natively
<razzy>
now
<beneroth>
and Amazon: some companies run tests before they stick with a server. Even when renting the same type of server from AWS there are differences in what hardware is actually used.
msavoritias has quit [Ping timeout: 252 seconds]
<beneroth>
razzy, GPUs cannot run all kinds of programs. A GPU is optimized for specific kinds of highly parallel math, e.g. vector/matrix calculations. They are used in 3D rendering (video games, CAD) and in machine learning ("AI")
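A small illustration of the kind of data-parallel math beneroth means, sketched here with NumPy on the CPU (CuPy, an NVIDIA library not mentioned above, exposes a nearly identical API that runs the same operations on the GPU):

```python
# GPU-friendly work: the same arithmetic applied to every element of
# an array at once, with no per-element branching or control flow.
import numpy as np  # CPU array library; CuPy mirrors this API on NVIDIA GPUs

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

c = a @ b            # matrix multiplication -> [[19, 22], [43, 50]]
d = a * 2.0 + b      # element-wise ops run in lockstep over all elements
```

General-purpose code (string handling, database logic, branching business rules) gains nothing from this model, which is why the coordinator stays on the CPU.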
<abu[7]>
razzy, you mean LLVM has direct Gpu instructions?
<abu[7]>
This would not help
<abu[7]>
You would need to program in llvm-ir
<abu[7]>
Better use some libraries
<beneroth>
yes, better stick to standard libraries
<razzy>
abu[7]: tests are good idea, I will think of them
<beneroth>
the big one is CUDA from nvidia.
<beneroth>
AMD is currently pushing a freshly released competitor, but I don't know the name right now.
<abu[7]>
razzy: Which kind of calculations do you have in mind?
<beneroth>
setting up CUDA is also not very easy. I've done it before; it can be tricky.
<beneroth>
but yeah, better use the libraries. Using LLVM directly means re-inventing those libraries. Not worth it unless you have something very specific in mind.
<abu[7]>
T
<abu[7]>
llvm is on the lowest level
<beneroth>
and for a start the easiest is probably to use Python. Python has excellent wrappers for those libraries (Python is big for two reasons: 1) it's used in teaching, 2) it has wrappers for many complex C++ libraries)
<beneroth>
using picolisp to coordinate smaller task-specific python programs and/or using the picolisp database for managing some data is probably very effective, likely more effective than using only python.
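One way the coordination beneroth describes could look: a minimal sketch of a task-specific Python worker that a PicoLisp parent could start as a child process and talk to over pipes (the operation names and the JSON-over-stdio protocol here are illustrative assumptions, not an existing interface):

```python
# Hypothetical worker: receives one JSON request, does the numeric
# work, returns a JSON reply.  A PicoLisp coordinator could spawn
# this script and exchange lines over stdin/stdout pipes; that
# wiring is omitted here.
import json

def handle(req):
    # dispatch on a small, illustrative set of operations
    if req["op"] == "sum":
        return {"result": sum(req["data"])}
    if req["op"] == "dot":
        return {"result": sum(x * y for x, y in zip(req["a"], req["b"]))}
    return {"error": "unknown op"}

# example round-trip, as it would arrive over the pipe
reply = handle(json.loads('{"op": "dot", "a": [1, 2], "b": [3, 4]}'))
# reply == {"result": 11}
```

Keeping each worker small and stateless like this is what makes the PicoLisp side simple: it only starts processes, feeds them requests, and stores the replies.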
<beneroth>
razzy, but if you don't need it now, don't bother too much with it.
<beneroth>
and the thing with Amazon servers is that you can change and re-deploy them easily, so I would pay only when you actually use it, not before.
<razzy>
beneroth: yes :)
<razzy>
Thank you all.
razzy has quit [Quit: leaving]
chexum has quit [Remote host closed the connection]
chexum has joined #picolisp
seninha has joined #picolisp
rob_w has quit [Remote host closed the connection]