Fri Mar 8 06:00:01 EST 2024
========================================
Slept from midnight to seven without waking.
A chance of showers early in the afternoon.
Showers early in the evening.
Highs in the lower 50s.
Southeast winds 5 to 10 mph.
Chance of rain near 100 percent.
# Work
* 10:30 AM - 11:00 AM Review "what is IT" write-up of test scenarios
* 11:00 AM - 12:30 PM CTO project team deep dive
* 01:00 PM - 02:00 PM IIPA ATO next steps
* 03:30 PM - 04:30 PM IIPA Appian hosting questions
# Home
* [ ] work on DNS redirection POC (rough sketch below, after this list)
* [ ] rebuild personal VM (RHEL EOL June 2024)
* [ ] car oil change
* [ ] schedule dentist appointment
* [ ] schedule optometrist appointment
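For the DNS POC item above: I don't have a design yet, but a minimal "answer everything with one IP" resolver using dnslib might be enough to start. The port and the target address here are placeholders, not anything decided.

```python
# Toy DNS redirector: answers every query with a single fixed A record.
# Uses dnslib (pip install dnslib); 10.0.0.5 and port 5353 are placeholders.
from dnslib import RR
from dnslib.server import DNSServer, BaseResolver

class RedirectResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        qname = str(request.q.qname)
        # Hand back the same A record no matter what name was asked for.
        reply.add_answer(*RR.fromZone(f"{qname} 60 A 10.0.0.5"))
        return reply

if __name__ == "__main__":
    server = DNSServer(RedirectResolver(), port=5353, address="127.0.0.1")
    server.start()  # blocking; test with: dig @127.0.0.1 -p 5353 example.com
```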
Read more of Cat Out of Hell.
https://techtactician.com/best-gpu-for-local-llm-ai-this-year/#Can_Your_Run_LLMs_Locally_On_Just_Your_CPU
What's in that i9 box I never finished building?
What would we need to do to add a local LLM-capable GPU?
* 500 W power supply (80 Plus Gold)
* ASRock B560M-HDV motherboard
* 32 GB RAM
That might be OK, though the 500 W supply is probably the weak point for any serious GPU.
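Back-of-envelope check, assuming something mid-range like an RTX 3060 (170 W board power, 12 GB VRAM); the GPU pick and the non-GPU draw numbers are all guesses.

```python
# Rough PSU headroom check for the i9 box (all numbers are estimates).
psu_watts = 500

draws = {
    "i9 CPU (stock, under load)": 125,  # guess; depends on the exact i9 SKU
    "RTX 3060 (board power)": 170,      # hypothetical GPU pick
    "motherboard + RAM + SSD": 50,
    "fans, misc": 25,
}

total = sum(draws.values())
headroom = psu_watts - total
print(f"estimated load: {total} W of {psu_watts} W "
      f"({headroom} W headroom, {total / psu_watts:.0%} utilization)")
```

By those numbers a 3060-class card would probably squeak by; anything in the 250 W+ class would want a new PSU.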
I guess I could see if I can run a local LLM on my M1...
https://github.com/oobabooga/text-generation-webui
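Before buying anything, a minimal smoke test on the M1 could look like this with llama-cpp-python (Metal-accelerated on Apple Silicon); the model path is a placeholder for whatever small GGUF file I'd actually download.

```python
# Minimal local-LLM smoke test via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-7b-model.Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to Metal on Apple Silicon
    n_ctx=2048,
)

out = llm("Q: What is DNS redirection? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```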
Watched The Gentlemen series on Netflix.
Fun.
This was a Guy Ritchie movie first, apparently.
Servings: grains 6/6, fruit 0/4, vegetables 3/4, dairy 4/2, meat 3/3, nuts 0/0.5
Breakfast: leftover pizza
Brunch: coffee
Afternoon snack: carrots
Dinner: Mexican