
Nvidia Tesla M10 - good for anything? : r/homelab - Reddit
Nov 16, 2023 · M10 is Maxwell, quite old, so it lacks features like tensor cores for fast float16 compute and any support for bfloat16/TF32; not great for Plex either (very old version of NVENC …
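The feature gap the snippet describes follows the GPU's CUDA compute capability. As a rough sketch (generation thresholds taken from NVIDIA's public architecture documentation; the helper name and dict layout are my own, and the M10's exact compute capability is 5.x Maxwell-class):

```python
# Rough mapping from CUDA compute capability to mixed-precision features.
# Illustrative only: thresholds follow NVIDIA's public docs, but this is a
# simplification (e.g. Turing 7.5 is folded in with Volta here).
ARCH_FEATURES = [
    # (min compute capability, generation, tensor cores, bf16/TF32 support)
    (8.0, "Ampere+",      True,  True),   # A100 and newer: TF32 and bfloat16
    (7.0, "Volta/Turing", True,  False),  # first tensor cores, fp16 only
    (6.0, "Pascal",       False, False),  # fast fp16 on P100, no tensor cores
    (5.0, "Maxwell",      False, False),  # Tesla M10/M40: no fast fp16 path
]

def describe_gpu(compute_capability: float) -> dict:
    """Return the (approximate) mixed-precision feature set for a capability."""
    for min_cc, gen, tensor, bf16 in ARCH_FEATURES:
        if compute_capability >= min_cc:
            return {"generation": gen, "tensor_cores": tensor, "bf16_tf32": bf16}
    return {"generation": "pre-Maxwell", "tensor_cores": False, "bf16_tf32": False}

# Tesla M10 is Maxwell (compute capability 5.x):
print(describe_gpu(5.0))
# → {'generation': 'Maxwell', 'tensor_cores': False, 'bf16_tf32': False}
```

This is why Maxwell cards like the M10 and M40 run LLM inference in fp32-equivalent speed regardless of the checkpoint's precision: there is no hardware fast path for fp16, let alone bf16 or TF32.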
Elon Musk Unveils Tesla AI5: 10x Power by 2025 - Reddit
Jun 21, 2024 · Tesla Inc. is an energy + technology company originally from California and currently headquartered in Austin, Texas. Their mission is to accelerate the world's transition …
Is the nvidia P100 a hidden gem or hidden trap? - Reddit
The Nvidia "tesla" P100 seems to stand out. 16GB, approximate performance of a 3070... for $200. Actual 3070s with same amount of vram or less, seem to be a LOT more. It seems to be …
Regarding NVIDIA TESLA M40 (24GB), is it the same as an RTX
Mar 7, 2023 · Regarding NVIDIA TESLA M40 (24GB), is it the same as an RTX 4090 (24GB) for chat AI? If we assume budget isn't a concern, would I be better off getting an RTX 4090 that …
Tesla K80 uses? : r/homelab - Reddit
Welcome to your friendly /r/homelab, where techies and sysadmins from everywhere are welcome to share their labs, projects, builds, etc.
Reddit - The heart of the internet
May 20, 2023 · So like many of you, I fell down the AI text gen rabbit hole. My wife has been severely addicted to all things chat AI, so it was only natural. Our previous server was running …
Nvidia P40, 24GB, are they useable? : r/LocalLLaMA - Reddit
May 7, 2023 · Given some of the processing is limited by VRAM, is the P40 24GB line still usable? That's as much VRAM as the 4090 and 3090 at a fraction of the price. Certainly less powerful, …
Nvidia Tesla K80 : r/LocalLLaMA - Reddit
Aug 19, 2023 · LocalLlama Subreddit to discuss about Llama, the large language model created by Meta AI.
Could llama2 70b be run on a tesla m10? : r/LocalLLaMA - Reddit
Dec 22, 2023 · I am trying to run llama2 70b and use it as a base model for further optimization. Does anyone have any experience using the tesla m10? Would it be…
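Whether a model like Llama 2 70B fits on a given card mostly comes down to weight-memory arithmetic. A minimal sketch, assuming weights dominate and using a crude ~20% overhead factor for KV cache and activations (the function names and overhead are my own assumptions, not a precise rule; note the M10 is four 8 GB GPUs on one card, not a pooled 32 GB):

```python
def model_vram_gb(n_params_billion: float, bits_per_weight: int,
                  overhead: float = 1.2) -> float:
    """Rough VRAM (GB) to hold model weights, with ~20% extra for
    KV cache and activations -- a crude assumption, not a precise rule."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

def fits(n_params_billion: float, bits_per_weight: int, vram_gb: float) -> bool:
    return model_vram_gb(n_params_billion, bits_per_weight) <= vram_gb

# Llama 2 70B at 4-bit needs roughly 42 GB by this estimate:
print(round(model_vram_gb(70, 4)))  # → 42
# One M10 GPU exposes 8 GB, so a single device cannot hold it:
print(fits(70, 4, 8))               # → False
# Even a 24 GB P40 cannot hold 70B at 4-bit on one card:
print(fits(70, 4, 24))              # → False
# A 13B model at 4-bit (~7.8 GB) fits comfortably in 24 GB:
print(fits(13, 4, 24))              # → True
```

The arithmetic explains the recurring answers in these threads: multi-GPU splitting or heavy offloading is required for 70B-class models on any of these cards.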
Worth to get a Tesla P4? : r/homelab - Reddit
I am thinking about adding a GPU to my homelab, mostly to support video processing and to play around with some self-hosted generative AI apps. My current homelab setup limits me to …