• 10 Posts
  • 537 Comments
Joined 1 year ago
Cake day: August 15th, 2023


  • (thinks out loud…)

    If you could force different speeds and different voltages, you could make some guesses as to what the cable might support.

    USB packets use CRC checks, so a bad checksum may indicate a speed mismatch or a physical problem. (Besides stating the obvious, my point is that running strict checks in each USB mode makes the CRC results more informative.)

    I just looked over the source code for libusb (like I knew what I was looking for, or something) and it seems that some of the driver(?) components hook really deep into the kernel. There might be a way to test specific parts of each type of handshake (for data flow or voltage negotiation) and isolate bad wires by process of elimination. (Rough starting point sketched below.)

    I think my point is that a top-down approach is likely possible, but it’s probabilistic.
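
    Not a wire-level test, but as a user-space starting point, here is a minimal sketch using pyusb (a Python wrapper around libusb) that just reports the speed each connected device actually negotiated. It assumes pyusb is installed and that the backend exposes the speed field; if a cable that should run at SuperSpeed keeps enumerating at High speed, that is already a useful hint.

    ```python
    # Minimal sketch: list each USB device and its negotiated speed via pyusb.
    # Assumes pyusb is installed and the libusb backend reports device speed.
    import usb.core
    import usb.util

    SPEED_NAMES = {
        usb.util.SPEED_LOW: "Low (1.5 Mbit/s)",
        usb.util.SPEED_FULL: "Full (12 Mbit/s)",
        usb.util.SPEED_HIGH: "High (480 Mbit/s)",
        usb.util.SPEED_SUPER: "Super (5 Gbit/s)",
    }

    for dev in usb.core.find(find_all=True):
        speed = SPEED_NAMES.get(dev.speed, "Unknown")
        print(f"{dev.idVendor:04x}:{dev.idProduct:04x} negotiated: {speed}")
    ```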




  • All good! It’s the same situation I described, and I can see that increasing temps did help. It’s good to do a temperature tower test for quality and then a full speed test after that. After temperature calibration, print a square of only 2 or 3 bottom layers that covers the entire bed, at full speed or faster. (It’s essentially a combined adhesion/leveling/extrusion volume/z offset test, but you need to understand what you are looking at to see the issues separately.)

    If you have extrusion problems, the layer lines will start strong from the corners, get thin during acceleration, and may thicken up again at the bottom of the deceleration curve. A tiny bit of line width variation is normal, but full line separation needs attention. (Some quick flow-rate math below shows why this happens at speed.)

    Just be aware if you get caught in a loop of needing to keep bumping up temperatures, as that starts to point to thermistor, heating element, or even mechanical issues.
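
    Rough back-of-the-envelope (all numbers below are illustrative assumptions, not measurements): the volumetric flow you demand is line width × layer height × speed, and if that exceeds what the hotend can actually melt, lines thin out exactly where the toolhead is at full speed.

    ```python
    # Quick estimate of demanded volumetric flow versus a hotend's melt capacity.
    # All numbers are illustrative assumptions, not measured values.
    line_width = 0.45      # mm (typical for a 0.4 mm nozzle)
    layer_height = 0.2     # mm
    print_speed = 150.0    # mm/s ("full speed" on many printers)

    flow = line_width * layer_height * print_speed   # mm^3/s demanded
    hotend_limit = 11.0                              # mm^3/s, rough PLA ballpark

    print(f"Demanded flow: {flow:.1f} mm^3/s, hotend limit: ~{hotend_limit} mm^3/s")
    if flow > hotend_limit:
        print("Expect thin or separated lines wherever the head reaches full speed.")
    ```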


  • I am curious as to why they would offload any AI tasks to another chip? I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

    It’s the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said it wasn’t GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)

    If the rendered image only covers 85% of a 4K frame’s pixels, that’s ~1.2 million pixels that need to be computed (quick math below), and it still seems plausible to keep everything on the GPU.

    With all of that blurted out, is FSR4’s AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU for offloading AI compute at speeds that didn’t risk creating additional lag. (I am just hypothesizing, btw.)
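
    For what it’s worth, here is the arithmetic behind the ~1.2 million figure, assuming a 3840x2160 frame and that “85%” means 85% of the pixels are rendered natively (the frame rate is just an illustrative number):

    ```python
    # Pixel budget if 85% of a 4K frame is rendered and the rest is reconstructed.
    # The 85% split and the 60 fps target are assumptions for illustration only.
    width, height = 3840, 2160
    total = width * height                  # ~8.3 million pixels per frame
    rendered_fraction = 0.85
    missing = total * (1 - rendered_fraction)

    fps = 60
    print(f"Pixels to reconstruct per frame: {missing / 1e6:.2f} million")
    print(f"At {fps} fps: {missing * fps / 1e6:.0f} million pixels per second")
    ```

    Even at 60 fps that is roughly 75 million pixels per second, which is a long way from the 29k/s of the CPU-bound example linked above, hence the point about GPU optimization.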



  • 185C is cold for PLA. It may work for slow prints, but my personal minimum has always been around 200C and my normal print temperature is usually at 215C.

    Long extrusions are probably pulling heat out of the nozzle faster than it can recover, so it temporarily jams until the filament can heat back up.

    Think of the hotend as a reservoir for heat. Long extrusions will drain it really fast, and once the hotend isn’t printing for a quick second, it fills back up really fast. At 185C, you are trying to print with almost no heat reservoir. I mean, it’ll work, but not during intense or extended extrusions. (Rough numbers below.)
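
    Ballpark math for why the reservoir matters, assuming rough PLA material constants and an illustrative flow rate (none of these are measured values): the heater has plenty of average wattage, but the melt zone can only transfer heat into the filament so fast, so starting hotter leaves more margin before the melt cools below where PLA flows well.

    ```python
    # Rough estimate of the heat a sustained extrusion pulls out of the hotend.
    # Material constants are ballpark PLA values; the flow rate is an assumption.
    density = 1.24e-3          # g/mm^3, approximate PLA density
    specific_heat = 1.8        # J/(g*K), approximate for PLA
    ambient, nozzle = 25, 215  # C
    flow = 12.0                # mm^3/s, an aggressive sustained extrusion

    power = density * flow * specific_heat * (nozzle - ambient)
    print(f"~{power:.1f} W goes into heating the plastic")  # ~5 W of a 40-60 W heater
    ```

    The average power is small, which is exactly why the failure shows up as a temporary dip while the reservoir drains, rather than the heater simply being too weak.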


  • For my applications, quantity is better. Since I do CAD work in addition to 3D scanning with only occasional gaming, I need the capacity.

    While I am 3D scanning, I can use upwards of 30GB of RAM (or more) in one session. CAD work may be just as intensive in the first stages of processing those files. However, I wouldn’t consider that “typical” use for most people.

    For what you describe, I doubt you will see much of a performance hit unless you are benchmarking and being super picky about the scores. My immediate answer for you is quantity over speed, but you need to test and work with both configurations yourself.

    I don’t think I saw anyone mention that under-clocked RAM may be unstable in some circumstances. After you get the new setup booting with the additional RAM, do some stress tests with Memtest86 and Prime95. If those are unstable, play with the memory clocks and timings a bit to find a stable zone. (Toying with memory speeds and timings can get complicated quickly, btw. Learn what the timings mean before you adjust them, as clock speed isn’t everything; there is a quick example of why below.)
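
    A quick illustration of the “clock speed isn’t everything” point (the first-word latency formula is standard DDR math; the kits listed are just made-up comparisons):

    ```python
    # First-word latency in nanoseconds: CAS cycles converted to time.
    # latency_ns = CL * 2000 / transfer rate in MT/s (standard DDR math).
    kits = {
        "DDR4-3200 CL16": (3200, 16),   # example kits, purely illustrative
        "DDR4-3600 CL18": (3600, 18),
        "DDR4-3600 CL16": (3600, 16),
    }
    for name, (rate, cl) in kits.items():
        print(f"{name}: {cl * 2000 / rate:.1f} ns")
    ```

    A DDR4-3600 CL18 kit lands at the same ~10 ns as DDR4-3200 CL16, so the “faster” kit only wins if the timings keep up.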


  • It seems like it would be extremely fast to me. Take a 50x50 block of pixels and expand it across a 100x100 pixel grid, leaving blank pixels where you have missing data. If a blank pixel is surrounded by blue pixels, the probability of the missing pixel being blue is fairly high, I would assume.

    That is a problem that is perfect for AI, actually. There are actual algorithms that can be used for upscaling, but at their core they likely boil down to a single function, and AIs are excellent at replicating the output of basic functions. It’s not a perfect result, but it’s tolerable.

    Whether this example is correct for FSR or not, I have no clue. However, having an AI shit out data based on a probability is mostly what they do. (Toy sketch of the idea below.)
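
    A toy version of the 50x50 → 100x100 idea in plain NumPy, just to make the “fill blanks from neighbors” intuition concrete; it has nothing to do with how FSR is actually implemented:

    ```python
    # Toy upscaler: scatter a 50x50 image onto a 100x100 grid, then fill each
    # blank pixel with the average of its known neighbors. Illustrative only.
    import numpy as np

    small = np.random.rand(50, 50)       # stand-in for the rendered low-res image
    big = np.full((100, 100), np.nan)
    big[::2, ::2] = small                # known pixels land on even coordinates

    known = ~np.isnan(big)
    filled = np.where(known, big, 0.0)

    neighbor_sum = np.zeros_like(big)
    neighbor_cnt = np.zeros_like(big)
    for dy in (-1, 0, 1):                # 8-neighborhood average
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor_sum += np.roll(filled, (dy, dx), axis=(0, 1)) * np.roll(known, (dy, dx), axis=(0, 1))
            neighbor_cnt += np.roll(known, (dy, dx), axis=(0, 1))

    big[~known] = (neighbor_sum / np.maximum(neighbor_cnt, 1))[~known]
    print("Filled", int((~known).sum()), "missing pixels")
    ```

    Real upscalers (and whatever FSR actually does) use far more context than a handful of neighbors, but the core job is the same: predict missing pixels from the surrounding data.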