• 0 Posts
  • 20 Comments
Joined 1 month ago
Cake day: December 6th, 2024

  • Aceticon@lemmy.dbzer0.com to Technology@lemmy.world · The bots are among us.
    edited 6 minutes ago

    Just adding to it from the other side (ish) of it.

    The point being that what you describe is a broader phenomenon and that, at least amongst Techies, taking into account the point of view of the people on the other side and choosing objective-oriented language with minimal or no social niceties - if you figure out they’re constrained in the time they have for handling messages like the one you’re sending - is something one learns rather than something that comes naturally.

    Same kind of thing applies, for example, when applying to certain jobs: in your cover letter or even CV you put all the stuff they care about for baseline selection upfront, and the kind of stuff that matters “if they’re interested” comes afterwards, so that if it’s clearly not a fit nobody’s time gets wasted. It’s nice for the people on the other side and, as somebody who has been on that side, I can say it’s appreciated and shows professionalism, which will help the candidate out if they do seem interesting from reading that baseline selection info.

    Not the same thing as your specific situation but same pattern, IMHO.



  • Gotta defuse people’s anger and indignation by having a zero-effort-zero-impact action that gives them that sweet, sweet “I did something” feeling of fulfilment, before they do something that can actually force a change that solves their problems, like striking or participating in a violent demonstration.


  • It eliminates the problem of depending on specific distributions and, maybe more importantly, it solves the problem of depending on specific distribution versions (i.e. working fine now but possibly not working at all later on the very same distribution because some libraries are missing or the default configuration is different).

    For example, one of the games in my GOG library is over 10 years old and has a native Linux binary, which won’t work on a modern Debian-based distro by default because some of the libraries it requires aren’t installed (meanwhile, the Windows binary works just fine with Wine). It would be kinda deluded to expect the devs to keep updating the native Linux binary (or even the Windows one) for over a decade, whilst if it had been released as a Docker app, that would not be a problem.

    So yeah, stuff like Docker does have a reasonable justification when it comes to isolating from some external dependencies which the application devs have no control over, especially when it comes to future-proofing your app: the Docker API itself needs to remain backwards compatible, but there is no requirement that the Linux distros are backwards compatible (something which would be much harder to guarantee).

    Mind you, Docker and similar are a bit of a hack to solve a systemic (cultural, even) problem in software development, which is that devs don’t really do proper dependency management and just throw everything and the kitchen sink in terms of external libraries (which then depend on external libraries which in turn depend on more external libraries) into the simplest of apps. But that’s a broader software development culture problem, and most present-day developers only ever learned the “find some library that does what you need and add it to the list of dependencies of your build tool” way of programming.

    I would love it if we solved what’s essentially the core Technical Architecture problem in present-day software development practices, but I have no idea how we can do so, hence the “hack” of things like Docker of pretty much shipping the whole runtime environment (funnily enough, a variant of the old way of building your apps statically with every dependency included) to work around it.



  • Look for a processor for the same socket that supports more RAM and make sure the Motherboard can handle it - maybe you’re lucky and it’s not a limit of that architecture.

    If that won’t work, break up your self-hosting needs into multiple machines and add another second-hand or cheap machine to the pile.

    I’ve worked on designing computer systems that handle tons of data and requests, and often the only reasonable solution is to break up the load and throw more machines at it. For example, when serving millions of requests on a website, you just put a load balancer in front of it that assigns user sessions and their associated requests to multiple machines: the load balancer pretty much just routes requests by user session, whilst the heavy processing is done by the machines behind it, in such a way that you can expand the whole thing by adding more machines.
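    (A minimal sketch of that “route by user session” idea, assuming a simple hash-based assignment; the host names and function below are made up for illustration, not from any real setup.)

```python
# Rough illustration (not a real load balancer): pin each user session to one
# backend machine by hashing the session id, so the session's requests always
# land on the same host and scaling out is just adding hosts to the list.
import hashlib

BACKENDS = ["app-01.local", "app-02.local", "app-03.local"]  # hypothetical hosts

def backend_for_session(session_id: str) -> str:
    """Always map the same session id to the same backend machine."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(backend_for_session("user-1234"))  # same session id -> same host every time
```

    (A real setup would use consistent hashing or sticky cookies so that adding a machine doesn’t reshuffle every existing session, but the principle is the same.)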

    In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services into multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you hit that architecture’s maximum memory.

    Granted, if a single service whose load can’t be broken down (so that you could run it as a cluster) needs more memory than you can put in any of your machines, then you’re stuck having to get a new machine. But even then, by splitting services you can get a machine with a newer architecture that can handle more memory but is still cheap (such as a cheap mini-PC) and just move that memory-heavy service to it, whilst leaving CPU-intensive services on the old but more powerful machine.


  • At some point in my career I worked in Investment Banking, making custom software directly for people like Traders (so in the area of IT that, in that industry, is called the Front Office).

    Traders have almost no free time, hence no time for social niceties, plus they’re “the business”, which is the reason for Front Office IT to exist and for whom it works. So eventually you just have to figure out their point of view, and that the only way you can do the part of your work that requires interacting with them (to figure out what they need, or to let them know what’s now available for them to use) is to use straightforward, objective-oriented talk like that.

    It was actually quite a learning experience for me as a techie: learning how to interact with time-constrained people who aren’t going to change to suit you, in a way that best does what’s needed for both sides.



  • Whilst I agree with you on everything but the first 2 words of your post, I think this is yet another “look at this cool gadget” post that overhypes something, and that is a kind of spam we get a bit of around here, even if nowhere near the levels of the Elon crap or even just US politics.

    This is especially frustrating for people who, like me, looked at the diagram linked from the article and found out it’s pretty much the same as a run-of-the-mill breadboard power adaptor with a USB-C connector and a slightly better design than the cheap Chinese ones, rather than something truly supporting USB-PD (this thing doesn’t even support the basic USB 1.0 negotiation needed to get more than 150mA when connecting to a proper USB host).

    That the article then mentions a “crowdfunding campaign” for something a junior EE can design with a bit of datasheet digging carries a bit of a stink of a cash-grab, so seeing it as spam is understandable.


  • If you look at the circuit diagram in their documentation linked from that article, that thing doesn’t even support USB-PD, or even just the USB 1.0 device side of the negotiation to increase the current limit from the default (150mA in USB 3) to high (900mA in USB 3). It will look like it works fine if you connect it to a dumb USB power supply (because those things don’t really do any USB protocol stuff, they just dumbly supply power over USB connectors up to the power source’s limit), but if you connect it to, say, a PC USB port (which does implement the USB host side of the protocol), the circuit on the breadboard that worked fine with the dumb USB power supply might not work, because the current it needs exceeds that default 150mA limit for devices that haven’t done USB negotiation (worse if it’s a USB 2.0 port, as the limit is lower for those).

    This thing is basically the same as the Chinese power breadboard adaptors you can get in places like Aliexpress, but with a USB-C connector instead of a Type-A, micro-USB or mini-USB one, plus it’s better designed (it has a proper Buck Converter instead of a cheap Voltage Regulator, plus better power supply filtering and a polyfuse to protect it and the host from current overdraws).

    The headline and the article seriously exaggerate this “achievement”.


  • TL;DR - It’s a nice but pretty run-of-the-mill breadboard power adaptor which happens to have a USB-C connector, but the article and its title insanely oversell the thing.

    This is not exactly as amazing an achievement as the headline implies, since the stuff necessary to talk to the upstream USB-PD host already exists as integrated solutions, so you just need to get a chip that does it (and even without it, you’ll get 150mA @ 5V by default out of an upstream USB 3 host, and up to 900mA with some pretty basic USB negotiation in a protocol that dates from USB 1.0, for which there have long been integrated solutions for both the device and the host sides).

    Further, converting those 5V to 3.3V just requires a buck converter or even just a voltage regulator (though this latter option is less efficient), for which there are already lots of integrated solutions available for peanuts, and where the entire circuit block needed to support them is detailed in the converter’s datasheet.

    Looking at the circuit diagram for this (linked from the article), they’re not even doing the USB-PD negotiation, or any kind of USB 1.0 negotiation, so this thing will be limited to 150mA from a USB 3 host, or to whatever current your traditional USB power source can supply (those power sources really just supply power, at whatever amperage they can deliver, over a cable which happens to have USB connectors, rather than genuinely implementing a USB host that limits current depending on negotiation with the USB device, so they don’t require the device to do any USB negotiation to go above the 150mA default).

    This is really “yet another run-of-the-mill USB power breadboard adaptor”, only the USB plug is USB-C rather than mini-USB or micro-USB (so, a different plug plus the handful of minor components the standard requires to properly support it), so pretty much the same as the cheap Chinese ones you can get from Aliexpress. This one does use a Buck Converter rather than the $0.1 Voltage Regulator in most of the Chinese boards, and actually does proper filtering of power supply noise and proper protection against overcurrent, so it is a quality design for such things, just not really a major advancement.

    Without the USB-PD stuff I wouldn’t really say that it brings USB-C Power to the breadboard (in the sense, as many would expect, of being able to draw a proper amount of power from a modern power brick that supports USB-PD over USB-C); it’s more something with a USB-C connector that brings power to the breadboard, as that connector is really the total sum of what it supports from the modern USB spec.

    What would really be nice would be something that does talk USB-PD to the upstream host AND can convert down from the 20V at which it supplies peak power, so that you can take advantage of the juicy, juicy (oh so juicy!) capability of USB-PD to supply power (up to 100W right now, and up to 240W with the newer revisions of the spec). Mind you, if you’re pulling 100W at 5V (which means 20A, a stupidly high current that will melt most components in a typical digital circuit) from your breadboard power adaptor, then I’m pretty sure magic smoke is being released from at least one of the components on that breadboard and, by the way, you’re probably damaging the power rails of that breadboard (aah, the sweet smell of burnt plastic when you turn the power on for your half-arsed experimental circuit!!!)
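    (Just to put numbers on that last point - this is plain P = V × I arithmetic, not something from the article.)

```python
# Back-of-the-envelope check of the currents involved, assuming ideal
# conversion with no losses: the same 100W is tame at USB-PD's 20V but
# becomes a breadboard-melting current once stepped down to 5V.
def current_amps(power_watts: float, voltage_volts: float) -> float:
    return power_watts / voltage_volts

print(current_amps(100, 20))  # 5.0  A - 100W delivered at USB-PD's 20V
print(current_amps(100, 5))   # 20.0 A - the same 100W on the breadboard's 5V rail
```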



  • I have a cheap N100 mini-PC running Lubuntu with Kodi, plus a wireless remote, as my TV box, and use my TV as a dumb screen.

    Mind you, you can do it even more easily with LibreELEC instead of Lubuntu and more cheaply with one of its supported cheap SBCs plus a box instead of a mini PC.

    That said, even the simplest solution is beyond the ability of most people to set up, and once you go up to the next level of ease of setup - a dedicated Android TV Box - you’re hit with enshittification (at the very least preconfigured apps like Netflix, with matching buttons on your remote) even if you avoid big brands.

    Things are really bad nowadays unless you’re a well informed tech expert with the patience to dive into those things when you’re home.


  • Aceticon@lemmy.dbzer0.com to Microblog Memes@lemmy.world · Full circle
    edited 3 days ago

    Having both a Degree (almost 2 Degrees, since I went to Uni to get one and then changed to a different one half-way through, so I’m an EE with part of a Physics Degree) and at the same time being massively self-taught because of being a Generalist (to the point that in my career I went down the route of working in what I had learned by myself and did for fun as a kid - computer programming - which was not the focus of either of those degrees), it has been my experience that certain things - mainly the fundamentals - are close to impossible to learn by yourself in a hands-on way.

    Further, discovering by yourself the best way to do something complex enough to require an actual Process really just means going through the same pains of trying stuff out, or limping along doing it in a seriously sub-optimal way, as the countless people in the past who battled the damned thing until somebody discovered the best ways of doing it. Worse, you’re unlikely to figure out the best way of doing it by yourself even after years of doing it, especially as discovering new ways of doing things is a different process from actually doing the work - you have to actually take time out from doing the work to try new stuff, with the expectation that you might do a lot of wrong things as you try new approaches, all the while not producing any usable results.

    (No matter what, to learn new ways of doing things you’re going to have to take time out of doing work to do the learning - because it’s pretty hard to figure out or just try out new ways of doing something without making mistakes, and mistakes aren’t a valid product of your work - and if you have to dedicate time to learning, the most efficient way is to learn from somebody else, which means either a mentor or a teacher.)

    It’s not by chance that even before Formal Education was a thing there was already the whole Master + Apprentices way of people learning complex domains (such as Blacksmithing).

    Even with the Internet it’s still immensely hard to learn by yourself complex subjects because:

    • Plenty of things you don’t know, you don’t even know that you don’t know them - in other words you’re not even aware they exist - so you won’t go looking for them.
    • Most of what’s out there is shit for learning. Formats such as Youtube optimize for Entertainment, not Learning, so the algorithm will feed you countless loud dog and pony shows pretending to explain things to you, all with about as much depth as a puddle, whilst the handful of properly deep explanations of things are algorithmed-away because they’re too long and boring.
    • Worse, the most experienced domain specialists seldom have the time or the inclination to make posts explaining certain things, even more so for videos (and from experience I can tell you that making a good Youtube video is a lot more complex than it seems until you try it). Further, what you tend to see is countless posts and videos by people who learned just enough about a subject to think they know tons about it (and thus can explain it to others) without actually knowing tons about it - in other words, people at the peak of the Dunning-Kruger curve. Most such “teachers” are just slightly less newbie than you.

    Last but not least, you’re not going to figure out the Fundamentals by yourself. No matter how much of a genius you are IQ-wise, you’re not going to, for example, rediscover by yourself the various Advanced Mathematics domains, because that stuff took centuries to figure out, by the most intelligent people around, often people whose only job was to discover things.

    So yeah, some things can only be learned from somebody else, the bulk of what you have to learn is much faster to learn from somebody else than by yourself, and since Formal Education with professional teachers is way more efficient a process than apprenticing under a Master (plus it is way broader in what you end up learning, though less deep than learning from a master/mentor) that’s pretty much all that’s available.

    Personally I think a mix of formal Education, Mentorship and Self-Learning is the best way to learn complex domains, but it’s pretty hard to find yourself in a position where you get a Mentor, and as somebody who often acts as one in my area, I can tell you I wouldn’t waste my time mentoring somebody who doesn’t even know the basics (for example, because they shunned formal education) when I could be mentoring somebody ready to directly learn the advanced stuff I know, which is what’s worth me spending some time teaching.


  • I use a pretty basic one (with an N100 processor and Intel integrated graphics) as a TV box + home server combo and it’s excellent for that.

    It’s totally unsuitable for gaming unless we’re talking about stuff running in DOSEmu or similar and even then I’m using it with a wireless remote rather than a keyboard + mouse, which isn’t exactly suitable for PC gaming.

    Mind you, there are configurations with dedicated graphics, but they’re about 4x the price of the one I got (which cost me about €120), and at that point you’re starting to enter the same domain as small-form-factor desktop PCs built around standard motherboards, which are probably better for PC gaming simply because you can upgrade just about anything in those, whilst hardware upgradeability of mini PCs is limited to only some things (like the SSD and RAM).


  • Well, I haven’t really made any large wire transfers to accounts outside the EU from that bank in over a decade so can’t really confirm or deny.

    I do know that, from past experience with banks in general, the people checking the validity of suspicious transactions (and large transfers to accounts outside the EU tend to fall into that classification, given the prevalence of online scams from countries where the Law is a bit of a joke) will actually call you, or at least they did in the UK some years ago (pre-Brexit), which was the last time I had experience with something like that.

    (At one point I also worked in a company that made Fraud Detection software).

    Maybe they switched to SMS to save money, I don’t know.


  • Ah, I see.

    Your point is that the use of a secondary channel for a One-Time Password is still an insecure method versus the use of a time-based one-time password (for example, as generated in a mobile phone app or, even more securely, by a dedicated device). Well, I did point out all the way back in my first post that SMS over GSM is insecure, and SMS over GSM seems to be the secondary channel that all banks out there chose for their 2FA implementation.

    So yeah, I agree with that.

    Still, as I pointed out, challenge-response with a smartchip signature is even safer (it’s way harder to derive the key, and the process can actually require the user to input elements that get added to the input challenge, such as the amount being paid in a transfer, so that the smartchip signs the whole thing and it all gets validated on the other side, which you can’t do with TOTP). Also, as I said, from my experience with my bank in The Netherlands, a bank using that system doesn’t require 2FA, so clearly there is a bit more to the Revised Payment Systems Directive than a blanket requirement for dynamic linking.


  • I think you’re confusing security (in terms of how easy it is to impersonate you to access your bank account) with privacy and the level of requirements on the user that go with it - the impact on banking security of the bank having your phone number is basically zero, since lots of individuals and companies who are far less security conscious than banks generally have that number.

    That said, I think you make a good point (people shouldn’t need a mobile phone to be able to use online banking and even if they do have one, they shouldn’t need to provide it to the bank) and I agree with that point, though it’s parallel to the point I’m making rather than going against it.

    I certainly don’t see how that collides with the last paragraph of my original post, which is about how the original thread poster has problems working with banks that “require a separate device that looks like a calculator to use online banking” - a device which is an element of the most secure method of all (which I described in my original post), is not at all 2FA but something altogether different, and hence does not require providing a person’s phone number to the bank. I mean, some banks might put 2FA on top of that challenge-response card authentication method, but they’re not required to do so in Europe (I know, because one of the banks in Europe with which I have an account uses that method and has no 2FA, whilst a different one has 2FA instead of that method). As far as I know (not sure, though), banks in Europe are only forced to use 2FA if all they had before for “security” was something even worse, such as username + password authentication, because without those regulations plenty of banks would still be using said even worse method (certainly that was the case with my second bank, which back in the late 2010s still used ridiculously insecure online authentication and only started using 2FA because they were forced to).



  • Those little boxes are just a bit of hardware to let the smartchip on the smartcard do what’s called challenge-response authentication (in simple terms: get big long number, encode it with the key inside the smartchip, send encoded number out).

    (Note that there are variants of the process where things like the amount of a transfer are added by the user to the input “big long number”.)

    That mechanism is the safest authentication method of all because the authentication key inside the smartchip in the bank card never leaves it and even the user PIN never gets provided to anything but that smartchip.

    That means it can’t be eavesdropped over the network, nor can it be captured on the user’s PC (for example by a keylogger), so even people who execute files received in their e-mail or install any random software from the Internet on their PCs are safe from having their bank account authentication data captured by an attacker.
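    (A toy sketch of that mechanism, with HMAC standing in for whatever signature scheme the smartchip actually uses - I’m not claiming this is the real banking protocol, and the key handling and amount below are made up for illustration.)

```python
# Toy challenge-response: the "card" signs a challenge from the bank, optionally
# combined with user-entered transaction details, using a key that in reality
# never leaves the smartchip; the bank verifies with its own copy of the key.
import hashlib
import hmac
import secrets

CARD_KEY = secrets.token_bytes(32)  # stand-in for the key embedded in the chip

def card_sign(challenge: bytes, amount: str = "") -> str:
    """What the card + reader do: sign the challenge (+ amount keyed in by the user)."""
    return hmac.new(CARD_KEY, challenge + amount.encode(), hashlib.sha256).hexdigest()

def bank_verify(challenge: bytes, amount: str, response: str) -> bool:
    """What the bank does: recompute the expected response and compare."""
    expected = hmac.new(CARD_KEY, challenge + amount.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)        # the "big long number" sent by the bank
response = card_sign(challenge, "250.00")  # user types the amount into the reader
print(bank_verify(challenge, "250.00", response))  # True; a tampered amount fails
```

    (The point above survives in the sketch: only the signed response ever travels over the network, so there’s nothing useful for a keylogger or eavesdropper to capture.)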

    The far more common two-channel authentication, aka two-factor authentication (log in with a password, then get a number via SMS and enter it on the website to finalize authentication), whilst more secure than just username+password, isn’t anywhere near as safe as the method described above, since GSM has security weaknesses and there are ways to redirect SMS messages to other devices.

    (Source: amongst other things I worked in Smart Card Issuance software some years ago).

    It’s funny that the original poster of this thread actually refuses to work with some banks because of them having the best and most secure bank access authentication in the industry, as it’s slightly inconvenient. Just another example of how, as it’s said in that domain, “users are the weakest link in IT Security”.