Camera, contacts, advertising ID, and Google Play billing all seem excessive for offline use, particularly when they aren’t needed for online use. Access to turn WiFi off/on may be needed to go offline via the app, but that also means it can turn WiFi on while offline.
Storage makes sense, given it needs to download the maps for offline use, but the permissions seem to imply full access to all storage.
I’m going off of the screenshot at https://organicmaps.app/
Pretty cool project I’d not heard of. Those offline GPS nav permissions seem pretty excessive for a project claiming a privacy focus.
The person isn’t talking about automating being difficult for a hosted website. They’re talking about a third party system that doesn’t give you an easy way to automate, just a web gui for uploading a cert. For example, our WAP interface or our on-premise ERP don’t offer a way to automate. Sure, we could probably create code to automate it and run the risk it breaks after a vendor update. It’s easier to pay for a 12 month cert and do it manually.
That’s my hope. Still from where I live I can only hope my specie contributions are used to affect that.
This poll tracking is showing Harris barely ahead on national polls. This millennium, Republicans have won the presidency in 2000, 2004, and 2016.
In 2000 and 2016, the Democratic candidate won the popular vote.
Winning the popular vote doesn’t mean shit. The electoral college is what matters.
That same NYT poll link lists 9 tossup states: Wisconsin, Michigan, Pennsylvania, Arizona, Georgia, Minnesota, North Carolina, Nevada, and Virginia.
You’ll notice all but the first three are in alphabetical order. That’s because all but the first three don’t have enough polling to make a prediction. Of those first three: a statistical tie in Wisconsin and Michigan with a Trump lead in Pennsylvania.
If you include Kennedy, Harris is ahead by 1% in Wisconsin and Pennsylvania but still tied in Michigan.
National polling trends are going in the direction I want, but they really don’t matter.
I write this from a state whose electoral college votes have never gone to a Democrat in my lifetime and won’t before my death. I’ll be voting for Harris, but that vote is one of those national votes that won’t actually help my preferred candidate.
The only way I can help is via monetary donation.
And if you’re a Harris voter in a solidly blue state, your vote means as much fuck all as mine does. Yes, it actually makes it to the electoral college, but, like mine, that’s a foregone conclusion. You should be donating money too and hoping it’s used wisely to affect those swing states.
Under the CMB method, it sounds like the calculation gives the same expansion rate everywhere. Under the Cepheid method, they get a different expansion rate, but it’s the same in every direction. Apparently, this isn’t the first time it’s been seen. What’s new here is that they did the calculation for 1000 Cepheid variable stars. So, they’ve confirmed an already known discrepancy isn’t down to something weird on the few they’ve looked at in the past.
So, the conflict here is likely down to our understanding of either the CMB or Cepheid variables.
Except it’s not that they are finding the expansion rate is different in some directions. Instead they have two completely different ways of calculating the rate of expansion. One uses the cosmic microwave background radiation left over from the Big Bang. The other uses Cepheid stars.
The problem is that the Cepheid calculation is much higher than the CMB one. Both show the universe is expanding, but they give radically different numbers for that rate of expansion.
So, it’s not that the expansion’s not spherical. It’s that we fundamentally don’t understand something to be able to nail down what that expansion rate is.
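For a concrete sense of the gap (these figures aren’t from the article above; they’re the commonly cited published values, so treat them as approximate):

```latex
H_0^{\text{CMB (Planck)}} \approx 67.4 \pm 0.5~\text{km/s/Mpc}
\qquad
H_0^{\text{Cepheid (SH0ES)}} \approx 73.0 \pm 1.0~\text{km/s/Mpc}
```

That’s roughly an 8% difference, far larger than either measurement’s error bars, which is why it can’t be waved off as noise.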
As a first book, I think Children of Time is much better than Shards of Earth. I enjoyed both series but would say the third book in each was the weakest. The Final Architecture series had a slightly stronger third entry.
And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.
Ideally, you’ve got a learned individual using AI to process data more efficiently, but one that is smart enough to ignore or toss out the crap and knows to carefully review that output with a critical eye. I suspect the reality is that most of those individuals using AI will just pass it along uncritically.
I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.
Thanks. Very interesting. I’m not sure I see such a stark contrast pre/post 9-11. However, the idea that the US public’s approach to the post-9-11 conflict would have an influence makes sense and isn’t something I’d ever have considered on my own.

Protest Songs: why do I feel like there were many more (and many more that were popular) in the 60’s and 70’s?
I’m a guy approaching 60, so I’ll start by saying my perception may be wrong. That could be because the protest songs I remember from the late 60’s and early 70’s weren’t representative of what was actually on the radio; they were the successful ones that got replayed. More likely, it’s because music is much more fractured now than what I was exposed to on the radio growing up. Thus, today, I’m simply not exposed to the same type of protest songs that still exist.
Whatever the reason, I feel that the zeitgeist of protest music is very different from the first decade of my life compared to the last.
I’m curious to know why. My conspiratorial thoughts say that it’s down to the money behind music promotion being very different over those intervening decades, but I suspect it’s much more nuanced.
So, why are there fewer protest songs? Alternatively, why am I not aware of recent ones?
Me too, but I’d put Usenet in there before Slashdot.
Spock, Uhura, Chapel, heck even M’Benga don’t make it a prequel, but a lieutenant Kirk does?
Because most people aren’t technical enough to understand there are alternatives, particularly if those alternatives involve removing a scary label telling you not to.
Which banner do you prefer?
3
The South. Just below Indiana, the middle finger of the South. And I say this as a Hoosier for much of my life.
As a guy responsible for a 1,000 employee O365 tenant, I’ve been watching this with concern.
I don’t think I’m a target of state actors. I also don’t have any E5 licenses.
I’m disturbed at the opaqueness of MS’ response. From what they have explained, it sounds like the bad actors could self-sign a valid token to access cloud resources. That’s obviously a huge concern. It also sounds like the bad actors only accessed Exchange Online resources. My understanding is they could have done more, if they had a valid token. I feel like the fact that they didn’t means something’s not yet public.
I’m very disturbed by the fact that it sounds like I’d have no way to know this sort of breach was even occurring.
Compared to decades ago, I have a generally positive view of MS and security. It bothers me that this breach was a month in before the US government notified MS of it. It also bothers me that MS hasn’t been terribly forthcoming about what happened. Likely, there’s no need to mention I’m bothered that I’m so deep into the O365 environment that I can’t pull out.
Nice job. Packet loss will definitely cause these issues. Now, you just need to find the source of the packet loss.
In your situation, I’d first try to figure out if it is ISP/Internet before looking inside either network. I wouldn’t expect it to be internal at these speeds. Though, did you get CPU/RAM readings on the network equipment during these tests? Maxing out either can result in packet loss.
I’d start with two pairs of packet captures when the issue happened: endpoint to endpoint and edge router to edge router. Figure out if the packet loss is only happening in one direction or not. That is, are all the UK packets reaching DE but not all the DE making it back? You should clearly be able to narrow into a TCP conversation with dropped packets. Dropped packets aren’t ones that a system never sent, they’re ones that a system never received. Find some of those and start figuring out where the drop happened.
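The core of that comparison is simple: a packet that the sender’s capture shows leaving but the receiver’s capture never shows arriving is your dropped packet. A minimal sketch of the idea (the sequence numbers here are made-up stand-ins; real ones would come from tcpdump captures at each edge):

```python
# Sketch: compare TCP sequence numbers seen leaving one end against
# those seen arriving at the other. The difference is the loss, and
# doing this per direction tells you which path (DE->UK vs UK->DE)
# is actually dropping traffic.

def dropped(sent_seqs, received_seqs):
    """Return sequence numbers that left one capture but never hit the other."""
    return sorted(set(sent_seqs) - set(received_seqs))

# Hypothetical example: DE->UK loses two segments, so that's the
# direction to chase upstream hop by hop.
de_sent = [1000, 2460, 3920, 5380, 6840]
uk_received = [1000, 3920, 6840]
print(dropped(de_sent, uk_received))  # -> [2460, 5380]
```

Wireshark will flag the same thing for you with the `tcp.analysis.lost_segment` display filter, but splitting it out per direction is what tells you which side of the link to blame.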
Just curious if you’ve had the chance to dig into this and can report anything back?
A blacklist, to keep using the email protocol as example, is a tool used sparingly and only when other filtering methods are unsuccessful or when greater damage is prevented that way.
Have you ever run a mail server? If so, have you looked at your logs? The RBLs on the managed mail gateway for my work turn away 70% of the attempts. This is even before spam scoring kicks in on the 30% initially accepted. A significant percentage of that is considered spam. Email has a complex set of automated tools to reject content without even viewing it.
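For anyone who hasn’t seen how an RBL/DNSBL check works mechanically: the receiving server reverses the connecting IP’s octets, appends the blocklist’s DNS zone, and does an ordinary DNS lookup; if the name resolves, the IP is listed. A minimal sketch, with a made-up zone name (real gateways query lists like Spamhaus):

```python
import socket

def rbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL lookup name: reversed octets + the list's zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """True if the DNSBL has a record for this IP (i.e., it's listed)."""
    try:
        socket.gethostbyname(rbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

# Checking 203.0.113.7 against a hypothetical list zone:
print(rbl_query_name("203.0.113.7", "bl.example.org"))
# -> 7.113.0.203.bl.example.org
```

Because this happens at connect time, before any message body is accepted, it’s how a gateway can cheaply turn away the bulk of attempts.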
I still think email, even though federated, is a poor analogy to make for Lemmy.

Teaching children about online manipulation without creating a paranoid world view subject to manipulation?
Ok, this is not going to be a well formulated question, because the concerns behind it are nebulous in my own head.
Some assumptions I have, that clearly inform the question that follows: I believe commercial, state, and other actors have sophisticated methods of influencing what I see on social media and thus, in part, what I think. I also believe that someone more willing to believe in the types of conspiratorial claims I’ve just expressed is more likely to be manipulated by the information they’re exposed to. And, yes, I fully appreciate the irony of those beliefs.
My child is adult enough that belief patterns I encourage are very unlikely to become deep patterns. That is, I’d have to work to indoctrinate my son, and he’d actively resist if my indoctrination was outside of societal norms.
He didn’t grow up exposed to the social media I suspect children do now.
How does a parent inoculate a child to the influence of social media without also creating a mindset willing to believe in a

Proxmox, xcp-ng, or something else?
So, I’ve been self-hosting for decades, but on physical hardware. I’ve had things like MythTV and an Asterisk VoIP system, but those have been abandoned for years. I’ve got a web server, but it’s serving static content that’s only viewed by bots and attackers.
My mail server, which has been active for more than two decades, is still in active use.
All of this makes me weird in the self-hosted community.
About a month ago, I put in a beefy system for virtualization with the intent to start branching out the self hosting. I primarily considered Proxmox and xcp-ng. I went with xcp-ng, primarily because it seems to have more enterprise features. I’m early enough in my exploration that switching isn’t a problem.
For those of you more advanced in a home-lab hypervisor, what did you go with and why? Right now, I’m pretty agnostic. I’m comfortable with xcp-ng but have no problems switching. I’m especially interested in opinions with a particularly negative view of one or the other.

Old school self hoster: scared of the security challenges of modern hosting
TL;DR: old guy wants logs and more security in docker settings. Doesn’t want to deal with the modern world.
I’m on the sh.itjust.works lemmy instance. I don’t know how to reference another community thread so that it works for everyone, so my apologies for pointing at sh.itjust.works, but my thoughts here are inspired by https://sh.itjust.works/post/54990 and my attempts to set up a Lemmy server.
I’m old school. I’m in my mid-50’s. I was in academia as a student and then an employee from the mid-80’s through most of the 90’s. I’ve been in IT in the private sector since the late 90’s.
That means I was actively using IRC and Usenet before HTTP existed. I’ve managed publicly facing mail and web servers in my job since the 90’s. I’ve run personal mail and web servers since the early 00’s. I even had a static HTML page that was the number one Google hit for an obscure financial search term for much of the 2000’s. The referrer IPs and search terms could probably have been mined for data.

Reddit already looks different for me
It’s not even June 12 for me, yet I suspect many subreddits went dark based on UTC.
I moved to Reddit during the Digg migration. Thus, I got the default subscriptions from back in the day. Over the years, I’ve unsubscribed from things I felt were crap, and I’ve added a number of subreddits.
Already, many have gone dark. My old.reddit.com homepage already looks much different than normal, and I know that a few subreddits that do show have announced they’ll go dark. I assume they are US based and timing it locally.
I’ve spent more time in the Lemmy fediverse than on Reddit since joining, but I’ve spent time on both.
I’ll admit to cynical skepticism of the impact of the darkening. I still don’t think it will make a difference in Reddit policy, but I now believe it will have a larger impact on Reddit traffic than I imagined.
I still expect it to have no change in Reddit attitude or really in Reddit users.

Login problems? Only one active login?
I signed up and am currently logged in via an iPad. I wanted to browse and post on a computer. I’ve tried multiple browsers and incognito modes. With all of them, when signing into sh.itjust.works, I get nothing but the spinning button after clicking login.
I’m not sure if it’s some capacity issue, or if Lemmy doesn’t allow the same user logged in via multiple browsers.
I’m a bit scared to logout and see if that’s the case.
Anyone have any insight?

Some Lemmy Technical Questions
Yes, I’m certain I could find answers to all these questions via research, but I’m coming here as part of the Reddit diaspora. My guess is that there’s a benefit to others like me in having this discussion.
I can vaguely understand the federation concept, the idea that my account is hosted at an individual Lemmy server and that other servers trust that one to validate my account. What’s the network flow like? I’m posting this to the lemmy.ml /asklemmy community, but I’m composing it on the sh.itjust.works interface. I’m assuming sh.itjust.works hands this over to lemmy.ml. How does my browsing work? Is all of my traffic routed through sh.itjust.works?
Assuming there’s a mass influx of redditors, what does it look like as things fail? I’m assuming some servers can keep up under the load and some can’t. If sh.itjust.works goes down under the load, can I still browse other servers? Or, do those servers think I should have some token from sh.itjust.works, because my cookies say I’m