  • There are about a million different flavors of how to download and execute a shell script. Regardless, you need to pipe the output of curl into bash; the -s flag tells bash to read the script from STDIN (and lets you still pass arguments to it). There is a quick sketch at the end of this comment.

    Here is an over-thought stackoverflow page on it: https://stackoverflow.com/questions/5735666/execute-bash-script-from-url

    Also, if the script is not being read properly, that might explain the dpkg lock issue. Running two instances of dpkg simultaneously is likely causing that collision you are seeing. (If one instance is running, it will touch a lock file and then delete it when it stops. It prevents “bad things” from happening when two instances of the same app want the same resources.)

    That is odd if your path is broken. curl should be in /usr/bin and ‘which’ should find it. Are you somehow launching another shell inside a shell? Like zsh inside of bash, or something in that flavor? (In some rare cases, that would break paths and profile configs for your active shell.)

    Regardless of why curl isn’t being found (or is only partially found, or whatever is going on), learn “env”. You need to get a decent picture of what your working environment is and why something as basic as curl “isn’t found”. (‘which’ is about as baseline a command as there is.)
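
    For reference, here is a minimal sketch of the pattern. The URL and the trailing argument are placeholders, not your actual script:

    # Pipe the remote script straight into bash; -s tells bash to read the
    # script from STDIN and lets you still pass positional arguments after --.
    # (example.com and --some-flag are placeholders.)
    curl -fsSL https://example.com/install.sh | bash -s -- --some-flag

    # Sanity checks for the “curl isn’t found” weirdness:
    which curl         # should print /usr/bin/curl on a stock install
    echo "$PATH"       # confirm /usr/bin is actually in there
    env | sort | less  # full picture of your working environment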



  • Fake or outdated info, actually. While this is a small tangent, I make it a habit to review basic, introductory information on a regular basis. (For example, I’ll still watch the occasional 3D printer 101 guide even though I could probably build one from scratch while blindfolded.)

    I have been in IT for a very long time and have branched out into other engineering fields over the years. What I have found, unsurprisingly, is that methods and theories can get outdated quickly. So, regularly reviewing things I consider “engineering gospel” is just healthy practice.

    For the topic at hand, it doesn’t take much misinformation (or outdated information) to morph into something absolutely fake, or at best, completely wrong. It takes work to separate fact from fiction, and many people are too lazy to look past internet pictures with words or 15-second video clips. (It’s also hard to break out of believing unverified information “just because that’s the way it is”.)






  • All good! It’s the same situation as I described, and I see that increasing temps did help. It’s good to do a temperature tower test for quality and also a full speed test after that. After temperature calibration, print a square that is only 2 or 3 layers tall and covers the entire bed, at full speed or faster. (It’s essentially a combined adhesion/leveling/extrusion volume/z offset test, but you need to understand what you are looking at to see the issues separately.)

    If you have extrusion problems, the layer line will start strong from the corners, get thin during the acceleration and may thicken up again at the bottom of the deceleration curve. A tiny bit of line width variation is normal, but full line separation needs attention.

    Just be aware if you get caught in a loop of needing to keep bumping up temperatures, as that starts to point to thermistor, heating element, or even mechanical issues.


  • I am curious as to why they would offload any AI tasks to another chip? I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

    It’s the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said they weren’t GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)

    If the rendered image is only 85% of a 4K image, that’s ~1.2 million pixels that need to be computed (rough math at the end of this comment), and it still seems plausible to keep everything on the GPU.

    With all of that blurted out, is FSR4 AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU for offloading AI compute at speeds that didn’t risk creating additional lag. (I am just hypothesizing, btw.)
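
    For what it’s worth, the rough math behind that ~1.2 million figure, assuming “85%” means 85% of the pixels get rendered natively and the rest are reconstructed (that reading is my assumption):

    # Back-of-the-envelope pixel count for a 3840x2160 (4K) frame.
    echo $(( 3840 * 2160 ))             # 8294400 pixels in a full 4K frame
    echo $(( 3840 * 2160 * 15 / 100 ))  # 1244160 pixels left to fill in, ~1.2 million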


  • I suppose you are correct. If the bit isn’t structural, it doesn’t need to pass any test for microcracks. If it is structural and it passes testing, YOLO that shit.

    It’s just the core frames that need serious attention though. I don’t think I have been around a single aircraft that wasn’t constantly bleeding some kind of fluid, so everything else not related to getting the thing in the air and keeping it from completely disintegrating while in flight is mostly optional. (I am joking, but not really. Airplanes hold the weird dichotomy of being strangely robust and extremely fragile at the same time.)


  • And there are significant technology differences. The new upgrade will be the B-52J or K.

    Proper aircraft maintenance cycles are intense, so it would surprise me if any of the airframes we use now still have original 1952 parts. Aircraft are subject to lots of vibration, and the aluminum in B-52s will eventually stress-crack because of it. (It wouldn’t surprise me if composites were added in many places instead of aluminum replacements, but that is just speculation.)

    Also during those maintenance cycles, it’s much easier to do systems upgrades since the aircraft is basically torn down to its frame anyway.

    It’s the same design as what we had in 1952, but they ain’t the same aircraft, philosophically speaking.








  • 185C is cold for PLA. It may work for slow prints, but my personal minimum has always been around 200C and my normal print temperature is usually 215C.

    Long extrusions are probably sucking out all the heat from the nozzle and it’s temporarily jamming until the filament can heat up again.

    Think of the hotend as a reservoir for heat. For long extrusions, it will drain really fast. Once the hotend isn’t printing for a quick second, it will fill back up really fast. At 185C, you are trying to print without a heat reservoir. I mean, it’ll work, but not during intense or extended extrusions.


  • For my applications, quantity is better. Since I do CAD work in addition to 3D scanning with only occasional gaming, I need the capacity.

    While I am 3D scanning, I can use upwards of 30GB of RAM (or more) in one session. CAD work may be just as intensive in the first stages of processing those files. However, I wouldn’t consider that “typical” use for someone.

    For what you describe, I doubt you will see much of a performance hit unless you are benchmarking and being super picky about the scores. My immediate answer for you is quantity over speed, but you need to test and work with both configurations yourself.

    I don’t think I saw anyone mention that under-clocked RAM may be unstable, in some circumstances. After you get the new setup booting with additional RAM, do some stress tests with Memtest86 and Prime95. If those are unstable, play with the memory clocks and timings a bit to find a stable zone. (Toying with memory speeds and timings can get complicated quick, btw. Learn what timings mean first before you adjust them as clock speed isn’t everything.)