โŒ


CodeProject Latest postings for The Lounge

Changed state doesn't stick

Microsoft...please fix this. Is anyone else seeing this behavior?

I'm using the "new" Outlook client that now comes bundled with a default Windows 10 installation. It's much better than the generic email client that used to be included, but this particular issue bothers me.

The client downloads the content of my inbox. Some messages are marked as read, others as unread. I manually mark everything as read. I don't even shut down the app, I leave it running in the background all the time.

When the mailbox is refreshed, the messages that I explicitly marked as read revert to unread. It's as if the client never relays the read state back to the server, so the refresh undoes my changes.

THIS IS ONLY FOR GMAIL ACCOUNTS. I'm also using the same client to fetch mail from accounts on hotmail.com, outlook.com, and yahoo.com, and those all work correctly. Only the Gmail accounts exhibit this behavior.

Has anyone else seen this?

[Edit]
I suspect if I go to the actual gmail site and mark the unread items as read, the local account might fix itself (I haven't tried it yet), but the whole point of having a client that can manage multiple mailboxes is to avoid using the web interface (yuck!).

[Edit 2]
WTH...if I shut down the app and reload it, it now remembers I've changed messages from unread to read. So, it doesn't bother persisting that until you shut down, and refreshing the mailbox takes precedence over what the client is tracking...?
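If anyone wants to check the server side without going through the web interface, a minimal sketch along these lines will do it (assuming IMAP access is enabled on the Gmail account and an app password is set up; the address and password are placeholders). It asks Gmail's IMAP server directly how many messages it still considers unread - if that number doesn't drop after marking everything read in the client, the client really isn't pushing the flag back.

# Minimal sketch; assumes IMAP is enabled on the account and an app password is used.
import imaplib
USER = "me@gmail.com"          # placeholder
APP_PASSWORD = "app-password"  # placeholder
imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login(USER, APP_PASSWORD)
imap.select("INBOX", readonly=True)         # readonly, so this check doesn't change any flags itself
status, data = imap.search(None, "UNSEEN")  # messages the *server* still considers unread
print("Server-side unread count:", len(data[0].split()))
imap.logout()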

Follow-up to that *slow* RAID setup from a few weeks ago...

I mentioned in a thread a few weeks ago I had acquired a Mediasonic HFR2-SU3S2 PRORAID 4-bay external enclosure, and was dismayed at the performance (2.5MB/s over USB3, 5MB/s over eSATA) with a RAID-5 configuration.

There was mention that RAID-5 has significant overhead compared to other types of RAID, and someone suggested trying RAID-10. I'm "losing" an additional drive worth of space (compared to RAID-5), but with RAID-10, as I'm writing this, performance is holding steady at ~100+ MB/s (vs 5, at best).

It blows my mind that performance would be that much different. There's overhead, and then there's 20x slower throughput. The system is otherwise identical - same drives, same enclosure, same cables, same system it's connected to.
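For what it's worth, the textbook arithmetic behind that overhead goes like this - a back-of-envelope sketch of my own, not a measurement of this particular enclosure:

# Rough model of the classic RAID small-write penalty; numbers are illustrative only.
RAID10_IOS_PER_WRITE = 2  # write the block to each of the two mirrors (on separate disks, in parallel)
RAID5_IOS_PER_WRITE = 4   # read old data, read old parity, write new data, write new parity
print(f"RAID-5 issues {RAID5_IOS_PER_WRITE / RAID10_IOS_PER_WRITE:.0f}x the disk I/O of RAID-10 per small write")
# That alone only accounts for ~2x, so the 20x+ gap I saw presumably also comes down to how
# this enclosure's controller handles the parity path (no write-back cache, slow XOR, etc.) - just a guess.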

This is with full-disk VeraCrypt encryption. Without encryption, RAID-10 showed spikes of up to 160MB/s (but generally holding steady at 120-130MB/s). I'm okay with that.

If there's something else to blame, I'm not seeing it. But right now, I have little choice but to say RAID-5 is a killer when it comes to write operations. Again, I knew you don't get any of that for free; there's some overhead, but I never expected it to reach that level.

If I've reached the wrong conclusion, then I'm wrong. I just see nothing else to blame right now.

And I'm a much happier camper. This is usable.

These mice were supposed to outlast me...

I'm very particular about the keyboard and mouse I'm using. Especially for my daily driver.

I'll tolerate anyone's mouse when I have to use their system, and I'll adapt to different keyboards given enough time, but I'm particularly finicky about my mouse. I still think the best mouse I've ever used was Microsoft's old IntelliMouse (their very first optical one, without a ball). Sadly, at one point Microsoft stopped making them, and I spent years replacing mice after that, trying to find the ideal one, because I knew the mouse I was using wasn't going to last forever.

Then years later Microsoft revived it as the "Classic IntelliMouse". I tried one - and despite the change in color (and some white LED they somehow felt was necessary), it felt and performed the same. So I bought 5. Given that the one I had been using was still functional, I figured 5 would outlast me.

That wasn't all that long ago.

This morning I opened the box for the last of those 5. What happened to quality control, MS?

The 4 others all developed problems that became severe enough that I just moved on to the next one in the pile. Either the right or left button became too sensitive (or I had to hammer it hard). Or the on-screen pointer started moving erratically, even though I wasn't touching the mouse. This time it was the scroll wheel - move down one 'click', and it moved up by 3 and then back down by 1. The faster I tried to scroll down, the faster it scrolled up. That sort of thing. Rather infuriating.

After having put up with this for months, this one feels like bliss again. But how long will it last? I checked on Amazon, and it's either discontinued (once again), or people want hundreds of dollars for them.

No, I don't have a question. I'm not even asking for alternatives. "What happened to quality control" was rhetorical. I know what happened. I'm just commiserating.


What sort of performance should I expect?

[tl;dr]: Is RAID5 really causing such a huge performance hit?

I have a system (a Hyper-V VM host) with both eSATA and USB3.0 connectors.

I have a retired set of 8TB drives. I got myself a Mediasonic HFR2-SU3S2 PRORAID 4-drive enclosure, which can use either connector.

I love how trivial this enclosure's RAID setup is. I chose RAID5, so I have a total of 24TB worth of storage. Performance, however, makes it downright unusable. I could leave my VMs powered down overnight to back them up, but what I'm currently seeing could take days. Backing up a VM while it's running is just not a good idea (I use robocopy), so the VMs have to remain down while backing them up. That's not gonna fly during my workweek.

I made sure that, whether I'm using USB3 or eSATA, the "Better Performance" radio button is selected in Device Manager / Disk drives / [the RAID enclosure] / Properties / Policies.

Write operations hold steady at ~2.6MB/s. Active time is flat at 100%.

Same setup, but using eSATA instead, holds steady at ~5MB/s. Better, but still way below expectations. I'm questioning what my expectations should be.

The OS sees the RAID, not individual drives. On top of that, I use VeraCrypt to encrypt the entire RAID. I understand RAID involves some overhead, especially for Write operations--parity calculations would be done by the enclosure hardware, not my VM host's CPU. OTOH, VeraCrypt also introduces its own overhead, and that would be done by the host's CPU (which holds steady at ~3-4% when copying, so that's hardly the killer).

Before I got the RAID enclosure, I backed up the VMs onto a single external disk over USB3, and there was always plenty of time to do the whole thing overnight. I forget what I got in terms of transfer rate, but I'll be sure to pay attention the next time I do it - surely at least 10x the current performance. That single disk is also encrypted with VeraCrypt, so--unless I'm missing something--the only thing left that can account for the difference in transfer rate is the fact that the target drives are set up in a RAID, as opposed to transferring to a single drive.
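To get apples-to-apples numbers next time, I'll probably just time a big sequential write myself against both targets (the RAID volume and the single external drive) - a quick-and-dirty sketch, where the path and sizes are placeholders:

# Quick-and-dirty sequential write test; run once per target and compare the results.
import os, time
def write_test(path, total_mb=2048, block_mb=4):
    block = os.urandom(block_mb * 1024 * 1024)  # incompressible data, so nothing can flatter the number
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                    # make sure the data actually reached the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed
print(round(write_test(r"E:\bench.tmp")), "MB/s")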

My (somewhat rhetorical) question is: Really?

Does my diagnostic make sense? Is the fact that I'm backing up to a RAID the real performance killer? Everything is otherwise the same - both the RAID and my single external drive are connected via USB3, and using VeraCrypt.

Does it make sense at all that RAID5 kills performance to the extent I'm seeing?

What would you expect with a setup like this? I know I'll never get close to USB3's theoretical maximum throughput, but this is insane.

[The RAID isn't indicating any failure, and the last time I used the drives individually, they were all working fine]

Tracking your old car...

I have a vague impression I posted about this a while ago, but couldn't find the thread.

My dad replaced his car with a newer one - not because it was getting old, but because the dealer he bought it from offered him a decent amount of money for the year-old car he traded in. Keeping up to date with the current model year made sense.

The car maker has an Android app that gives you access to a bunch of stats and functions, like remotely locking/unlocking the doors, turning on the headlights, opening the trunk, honking the horn, starting the engine, showing its current location on a map, etc.

My dad, not being a smartphone guy, just traded in the car and didn't reconfigure anything in the car or in the phone app when he did so. I pointed out a few weeks ago that neither the dealer nor the guy who bought the 'old' car did either, so his phone was still capable of reporting how much gas was left in the tank, unlocking the doors, and, well, showing exactly where the new owner lives, or works, or happens to be at any moment. Which can lead to nasty stuff if you ask me.

Try as I might, I couldn't even find a place in the app where I could register a new vehicle - let alone remove the old one. I'm not exactly a Luddite; I've been earning a living as a software developer for nearly 30 years, and if I can't find what should be a basic option...someone's seriously dropped the ball.

Still, in spite of this, you'd think a dealer selling a used car would do his best to delete any of the previous owner's data, unlink the app from the vehicle, etc...whatever's necessary. But apparently, no. I wouldn't be surprised if the new owner could drive by and open my dad's garage door, since he had it programmed into the car.

Worse scenario: if the new owner parks it in a garage attached to his house, one could remotely start the engine every half hour and asphyxiate someone overnight.

It annoys me to no end, knowing these possibilities, that dealers could be so careless. This isn't even a hack, it's working as designed.

Amazon might as well take that feature away

Filtering on some properties, I mean.

I'm looking at hard drives, and I can set filters for drives of various capacities, but it tops out at "8TB and up". Thing is, they have 10, 12, 14, 16, 18, and 20TB drives now (and probably larger ones too), so even when selecting the highest capacity option, there's still a lot of stuff I'm not interested in to wade through.

Same with NVMe drives; the filter tops out at 1TB, but there are also 2, 4, and 8TB drives.

I realize drives of certain sizes might not have existed when they wrote the code behind this...but how long should it take Amazon to update their filters to add those capacities?

Think the same problem exists when it comes to pant sizes?

MS can't catch a break when it comes to search

There's yet another article on Slashdot claiming Bing's market share has barely budged despite ChatGPT being added to it, and initially being a big hit, yada-yada.

And yet barely 2 days prior, another article read, "Google Search Really Has Gotten Worse, Researchers Find". I've been seeing those sorts of articles time and again over the last few months. I can't say whether that's true. I try to do as little as possible with Google nowadays.

I've actually been doing my searches through Bing for a few years now, using Edge (since it moved to the Chromium engine), and frankly, it's rare that I can't find what I'm looking for within the first few results of a query. And generally, if I can't find anything on Bing, Google doesn't fare quantifiably "better".

And MS already has all the information it wants from me, if not through my search queries and browser telemetry, then through telemetry sent by the OS itself. If Google's search results truly are getting worse (not my claim), I'm not sure why they should still be in my life. Frankly, if my data's gonna get collected anyway, I'd rather have one company have it than two. Especially when one of them is purely an advertising company (whereas the other is merely trying to move in that direction, despite failing repeatedly - and the company's very survival doesn't rely on its ad department).

Nobody it seems ever misses a chance to ridicule Edge or Bing. Laugh all you want, I'm actually rather happy with how it's working out for me.

Obviously, YMMV. So what say you? Are Google's results getting as unusable as some claim? If so, why stick with it?

PC obituary

So 2 days ago, at roughly 4:00am, the PC I've been using essentially as a poor man's NAS started to make some loud noises...an Acer easyStore H340. I could've sworn I got it in 2007, but the earliest discussions I can find right now that mention it go back to 2009. This goes back to when MS was trying to push the idea of having home servers. I remember it came with the original Windows Home Server (the 2007 release), which was based on Server 2003. There was also Windows Home Server 2011, based on Server 2008 R2, but I just blew it away and installed Windows 7.

Anyway. Turns out it's the tiny fan in their proprietary PSU. I probably could find a suitable replacement, but...after 14-15 years, I figure, it's time to let it go. It's been running 24/7 all that time.

It's dog slow, running one of Intel's earliest Atom CPUs with 2GB of RAM. But its only job was to provide network shares; who needs a fast CPU and tons of RAM for that? It's got 4 front-loading bays - you can just slide drives in and out. I bought it, as mentioned, as a poor man's NAS. Absolutely nothing fancy, just to share files...and it did the job nicely enough.

In the end, I never really did take advantage of the multiple front-loading drive bays; whenever the data drive got full, I replaced it with one of larger capacity. I can't remember the size of the first data drive I put in it, but the current one is 16TB. I just moved it to another PC in an external enclosure and re-created a few shares (so, apart from the host PC's name changing, the other systems using it aren't even aware anything's different). But I'll probably move it into one of my full-tower systems as an additional internal drive. I don't trust USB not to randomly lose its connection, even though I'm currently going directly from the enclosure to a PC (and avoiding additional chained USB hubs and such).

Normally I try to keep old PCs going for as long as I can. Right now, it's just suddenly gotten too noisy to ignore. I suppose I could put it in another room (where my main, just-as-noisy VM host currently sits), but I figured it was time to let this one go.

What's your oldest still-in-use PC's war story?

The KISS principle really applies to networks...

I know enough about networking--clearly enough to be dangerous--but not enough to resolve all problems. My network configuration started simple, but grew in complexity over time (years in the making). Trying to reconfigure everything all at once just proved to be too much.

I recently switched ISPs (there's a long and sad story that goes with that, which I won't get into), and I had to put the KISS principle into practice.

The theory was that I'd only have to disconnect the wire from my DSL modem (which went into my router) and hook it up to the new provider's router.

A full day later, I:

a) removed 2 routers, both providing wi-fi
b) had the new provider's router bypass my own router and feed directly into a switch
c) removed a pair of Ethernet-over-powerline adapters altogether
d) replaced one of the routers with a second switch
e) ran a cable between both switches
f) got Pi-hole out of the equation

This means the ISP's router is now doing all the heavy lifting (whereas it used to be my own router's responsibility), including wi-fi, so I'm now more at the mercy of that one router than I've ever been. But the rest of it is comparatively soooo simple...

The complete saga is just way too long to get into in detail. Suffice it to say that having multiple routers on the same network is just going to end badly, with each router trying to assert itself as being in charge of everything; it's a fight to the death. Do that over wireless on both ends, and it's just a recipe for disaster.

At the very least I want to eventually re-introduce Pi-hole, as I've now been reminded just how bad some pages are without some serious ad-blocking. But I've been seriously burnt this weekend, and I want to take it a step at a time.

Converting old DVD rips...

Years ago I ripped most of my purchased DVD collection, to VOB files in AUDIO_TS and VIDEO_TS folders, so if I were so inclined to re-burn them to DVD, these would be compatible with "regular" players.

They're taking quite a bit of room however (4.37GB for a full single-layer disc, nearly twice that for dual-layer), and I know the h.265 codec is a lot more efficient than the old MPEG-2 used by DVDs.

I know there are a lot of programs that can probably convert those, even though h.265 is really designed for higher-resolution videos. My problem is trust - can I trust that the conversion will be done correctly? What I mean by that is that I've seen conversions where audio and video slowly start to drift, so much that by the time a 2-hour movie ends, the audio is "off" from the video by a few seconds. And I know the source is okay. Worse, if I quickly jump to various parts of the video, this is NOT apparent, so the only way I can really tell whether a converted video suffers from this is to watch it from start to finish.

Obviously I don't want to do that with a few hundred discs.

Has anyone done this sort of conversion before (specifically, with the h.265 codec), and can vouch that the program used does NOT introduce this sort of problem?

I don't really care about preserving menus, extras, subtitles, alternate audio tracks, etc. If I can end up with a single, much smaller .mkv or .mp4 file (instead of a set of 4+GB folders with multiple VOB files), I'll be happy.
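In case it helps frame the question, this is the shape of what I'd try first on a single disc - not vouching for it, just a sketch (it assumes ffmpeg with the libx265 encoder is installed and on the PATH; the paths, title number and CRF value are placeholders) - and then watch that one movie end-to-end to check for drift before committing to a batch run.

# Sketch only: join one title's VOBs and re-encode the video to H.265, leaving the audio untouched.
import subprocess
from pathlib import Path
def convert_title(video_ts: Path, out_file: Path, title: str = "VTS_01"):
    # A DVD title is split across VTS_xx_1.VOB, VTS_xx_2.VOB, ... (VTS_xx_0.VOB is the menu);
    # feeding them to ffmpeg as one concatenated stream keeps the timestamps continuous.
    vobs = sorted(video_ts.glob(f"{title}_[1-9].VOB"))
    concat_input = "concat:" + "|".join(str(v) for v in vobs)
    subprocess.run([
        "ffmpeg", "-i", concat_input,
        "-c:v", "libx265", "-crf", "20",  # re-encode the video as H.265, quality-targeted
        "-c:a", "copy",                   # copy the existing audio track as-is
        str(out_file),
    ], check=True)
convert_title(Path(r"D:\Rips\SomeMovie\VIDEO_TS"), Path(r"D:\Converted\SomeMovie.mkv"))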

Revisionism in journalism

I've been subscribed to the BBC World News RSS feed for years. Yes, RSS feeds are still a thing. I wouldn't have it any other way either, given that nobody places ads on those feeds (or rather, doesn't bother to, for some reason). But that's not the point.

I've been noticing for quite a while now that they'll often re-surface old articles - days, weeks, even months old - articles they've already published, republished with updated bits and pieces: some numbers adjusted, some details added that weren't there before, that sort of thing. Some of the same articles show up time and time again.

I never find these to be of particular interest (no matter what got updated), so I just delete these "new" entries that show up as unread at the bottom of the chronological list.

I really wonder who those updates are there for. RSS has fallen out of favor, so very few people should even notice. I can only assume that, among the population at large, only people searching for an article on a specific topic might find them, and read the latest version as if it were the first published instance (and really, how might one even know, unless they're marked as such, which they never are?) What's the point? After a while, if something's really worth bringing up again, doesn't it warrant having a brand new article written instead? If it's not, then presumably you're concluding people shouldn't care enough, so as a reporter, you should just let those old articles go...

I don't like to see history rewritten. If it has to do with fact-checking, or new details having come to light, I've seen newspapers publish follow-up articles, corrections as part of an addendum, that sort of thing. These online articles, however, don't get an addendum; the original gets modified and then passed off as if it were "as originally written".

I'm not sure whether this is common and other news sources do the same; this is the only news feed I subscribe to, so the BBC is the only outlet I've seen doing it.

Anyone know anything about journalism that can shed some light as to what the real motive might be?

I'm sure I'm reading too much into this, as the topics in those revised articles are generally rather benign.

Linux, why do you keep disappointing me?

I thought I'd finally have a reason to have a machine running some version of Linux on bare metal, and not in a VM. But nope, still found some show-stopper that sent me right back to Windows.

I bought a 5-bay USB-C hard drive enclosure. I thought I'd dedicate a machine to run TrueNAS, and put some of my smaller(-ish)/retired drives to use again in a software RAID configuration.

Apparently I had silly expectations. Software RAID over a USB connection is "just not reliable enough", so TrueNAS doesn't support it. Only one of the drives is showing up in the web-based admin UI. Supposedly you can drop to a command prompt and build the drive pool from there, but (a) they strongly recommend against it and (b) if you subsequently keep using the admin UI to manage it, you risk breaking things. And "breaking things", when it comes to a RAID configuration, usually means very, very bad things. So that's a non-starter for me.

I thought I had done my homework; people rave about TrueNAS; it's described as professional-grade, yet user-friendly and (bonus) open-source. I had come to the understanding that you could throw just about anything at it and it would work. But in reality, 10 minutes after a fresh install, this is where I found myself.

Yet puny, crappy Windows sees all the drives, and its decades-old Disk Management will dutifully create a software RAID out of them without a complaint or warning.

I want to like Linux. I really do. I want to run it on a system and have it be useful. I've installed dozens of distributions on VMs, but still haven't found enough of a use for any of them to have an actual physical machine committed to running it natively. I thought this would be my way in. But no, it knows better than me and won't let me do it. I thought that was Apple's thing.

[/rant]

I thought it was a cardinal sin to force a server to reboot...?

...which is why I used Server 2022 when I built my latest dev VM. I got tired of finding out my previous Windows 10 dev VM had rebooted right after Patch Tuesday.

But no, my dev machine rebooted last night at 00:45. Lost an awful lot of context.

Meanwhile, the VM host, running Server 2012 R2, back when it was still supported and getting updates, would patiently wait for months if I just let it.

Surely server admins aren't putting up with this. Surely MS hasn't changed the default behavior so a server OS can now reboot if it just feels like it.

What people search for...

So The Verge has an article[^] right now from the Google trial that's currently underway. They included a sample of "the most lucrative search queries" for a given week. They (well, Google) claim that, based on the sample week of September 22nd, 2018, the top queries ordered by revenue consisted of:

iphone 8
iphone 8 plus
auto insurance
car insurance
cheap flights
car insurance quotes
direct tv
online colleges
at&t
hulu
iphone
uber
spectrum
comcast
xfinity
insurance quotes
free credit report
cheap car insurance
aarp
lifelock

If that's the case, man, I guess I just never learned how you're supposed to use Google, and they must hate my guts. For the most part, these can all be categorized as "things people search for because they're buying something".

Honestly, I never use Google for those types of things. My searches are way more arcane. I'll google for some factual tidbit someone mentioned that I want to know more about - generally the answer will lead me to Wikipedia types of sites. I'll google for the documentation or sample usage for some obscure API I'm trying to use. Generally, I'll end up at learn.microsoft.com/[some SDK page], or here on CP, or StackOverflow. Again, little chance for anyone to monetize anything.

I understand how the article's list makes sense, in that these are the "most lucrative" queries. Mine aren't. All these years, I can't think of many queries I might've submitted to Google that might be candidates for that list. I guess I just never search for "consumer stuff".

Just to be topical, if I were looking for the best price on snow tires, I'd bring up the sites for local stores that I know sell tires, and do the search on there, because I'm not gonna buy snow tires from a store on another continent, even if they have the best price in the entire world. And even if I search Google to find a store's site (because it happens to be less straightforward than [storename].com or .ca), Google still has no idea what I'm going to that site to search for. Sure, Google searches can be geo-located, so it might only return results from "local" stores, but my ISP is in another city altogether, so as far as Google knows, those "local" stores it's giving me results for are hundreds of kilometers away.

What do you say, do you use Google "as a consumer", in a way that makes them money because of an eventual purchase, or do you stick to "things that don't have a price tag associated with them"? I get the impression if I was the typical Google user, it never would've gotten off the ground...

Multiple displays, with primary being HDMI

My primary monitor is connected via HDMI.

Secondary monitors are connected via VGA and USB-to-VGA.

Am I right to conclude that if you have a mixture of HDMI (primary) and VGA (secondary), when you power off the display connected via HDMI, its windows all move to the other displays? Or rather, the windows stay in place when the display is powered off, but move around to other monitors when the one connected via HDMI is powered back on...

It's been like that for years, it seems, but I don't believe I've ever solicited feedback on this topic. The PC on my desk has changed over the years, so it's not system-specific (although I could just be unlucky).

This is getting annoying. As more and more display devices switch to HDMI, I suspect this is only going to get worse, unless Windows stops messing around like this.

Your browsing data, in a single archive...

This page presents a "Create an archive" button to download all (?) the browsing data Microsoft has on whatever account you use to log into that page.

The .zip file produced (in my case) contains 4 files:

BrowsingHistory.csv
ProductAndServiceUsage.csv
SearchRequestsAndQuery.csv
UserVisitLocations.csv

Clearly this is private information (which I'm sure MS pinky-swears is only accessible to you), but it is rather interesting information.

I've seen scripts before that will export similar data from your local browser (reading it out of local files), but the catch there is that the local browser only knows about, well, what you've browsed locally. If you browse from multiple devices, a script extracting that data will only show the subset browsed from that device. The web site above, on the other hand, is device-agnostic - this is everything collected across the board.

(Unless that data all gets included when you allow your browser to sync settings - I don't know whether that's the case).

Regardless - my question is:

Has anyone ever written a utility to slice and dice that data? I honestly don't have a specific usage scenario, but I'm thinking this could be interesting. It's trivial enough to parse (it's all CSV, and all columns are self-explanatory), but I wonder if anyone's already put something together to present this in interesting ways...?

I'm otherwise tempted to automate the data retrieval, dumping this into a database, and building reports around this...but as mentioned, I don't have a specific usage scenario right now, and frankly I kinda suck at creating compelling reports, Power BI-style.
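If I ever do get around to it, the first pass I have in mind looks something like this - a sketch only, and the column names are guesses on my part (check the actual headers in your own BrowsingHistory.csv):

# Sketch: most-visited domains, plus visits per week, from BrowsingHistory.csv.
from urllib.parse import urlparse
import pandas as pd
df = pd.read_csv("BrowsingHistory.csv")
# Guess which columns hold the URL and the timestamp (hypothetical; depends on the actual export).
url_col = next(c for c in df.columns if "url" in c.lower())
time_col = next((c for c in df.columns if "time" in c.lower() or "date" in c.lower()), None)
df["domain"] = df[url_col].astype(str).map(lambda u: urlparse(u).netloc)
print(df["domain"].value_counts().head(20))              # top 20 domains by visit count
if time_col is not None:
    df[time_col] = pd.to_datetime(df[time_col], errors="coerce")
    print(df.set_index(time_col).resample("W").size())   # visit counts per week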

In-place OS upgrade - Linux vs Windows

First off, I'm not a Linux fanboi. I like to tinker with it, I've played with countless distributions, both mainstream and obscure, and have built more Linux VMs than I can remember.

For the first time ever, I'm doing an in-place upgrade right now, of Debian 11 to Debian 12, on a system I'm actually using (hosting Pi-hole - and that's it). About a total of 8 commands, waiting, a reboot, then all good to go. Actually, I'm not sure a reboot will even be necessary; I'm currently still in the waiting phase as packages are being installed...

I don't know, I can't quantify it--but I can't shake the feeling that an in-place Linux upgrade leaves the system in much better shape than an in-place Windows upgrade has ever been able to do.

Maybe it's the placebo effect. But I always feel dirty upgrading Windows, in that there's probably gigabytes worth of crap the upgrade leaves behind, that Windows has no means of thoroughly cleaning up. Yes, it keeps a WINDOWS.OLD folder, and yes, it will eventually delete it on its own over time...but it still leaves me with a nasty feeling that Linux doesn't. It's not just the disk space, but probably some stuff left running, or badly configured, that can only be avoided by wiping/repaving.

After many bad experiences over the decades, I always do clean installs of Windows. I just can't bring myself to fully trust it, even if the upgrade is entirely successful.

Am I imagining things? Is Linux truly more apt (pardon the pun) to do a better job of not leaving unnecessary crap behind?
โŒ
โŒ