
White House Tesla auto mall commercial

YouTube Video
Maybe when you're closer to adding Friendica I'll return with a similar pitch, but for Friendica accounts.
Technically speaking, from what I've read, Identity+ and Friendica both support SAML as an SSO protocol. But given the limited documentation of the latter and the near-total lack of documentation on the former (you really have to Ctrl-F that page to find SAML referenced at all), it seems like a potentially janky integration where users would lose control of their usernames/nicknames. And knowing the government (despite me literally advocating for them lol), they'd probably generate a horrible username like a UUID.
But maybe that just means there needs to be a better Friendica SSO authentication addon.
Regardless, the general vibe of a platform like Friendica definitely feels more aligned with my proposal of "one account, one human user" than something like Lemmy anyway, as several other commenters have suggested.
True. And Lemmy is even designed for a mix of bot and human accounts, given the "account is a bot" flag that can be set in account settings.
The question is, how do we ensure the accounts that are not flagged as a bot, are indeed not bots?
Yeah, the vibe I'm getting from the general feedback to this post seems to be "interesting idea, but for something other than Lemmy." For the Lemmy platform in particular, my proposal seems to be a solution in search of a problem.
I do think it’s an interesting concept and would be an interesting experiment on a new instance.
I’m just not sure that’s feasible on this platform. Lemmy is really designed to keep people anonymous.
I was imagining that this kind of verification would be part of account registration. So it wouldn't be like "you have two classes of user account, one has a checkmark or something", but instead "you have one class of user account, and can't log in unless you verify you're a unique human".
Which, yeah, would probably work better on a new instance, so people can choose "this is the server where having an account means I am a real person" vs "this is the server where I stay anonymous to everyone, including site admins". An instance that mixed 'unverified users' and 'verified users' would probably just be a hassle with no benefit.
If it was done on a designated instance, I don't think anything would, at a technical level, prevent it from being done on any particular platform (eg. lemmy vs mastodon vs pixelfed). But I'll concede that the design of Lemmy may make it the wrong platform for my proposal.
In a way it feels like Twitter's verified feature, and that makes me wonder if it would work on Mastodon.
I agree that it's similar to Twitter's verified feature.
But from what I've seen of Mastodon, Mastodon's verification feature doesn't work like Twitter's - Mastodon just lets you put links on your profile and verify the link, but that's just you proving to Mastodon that you control the domain name. Sort of like getting a TLS certificate from Let's Encrypt, where you just prove to LE that you control the domain.
It's not like a 'verified' status on the account as a whole.
So the way I imagine it, it'd work for Mastodon, but not by creating two classes of users - it'd just work by ensuring all users on the instance as a whole are verified.
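For anyone unfamiliar with how Mastodon-style link verification works under the hood: your profile lists a URL, and that page must contain a link back to your profile marked `rel="me"`. Here's a rough sketch of that check in Python (the profile URL and HTML are made-up examples; Mastodon's actual implementation differs, but the idea is the same). Note it only proves control of a web page, not personhood:

```python
# Sketch of a rel="me" backlink check, the mechanism behind
# Mastodon-style profile link verification. Illustrative only.
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    """Collects href values of <a>/<link> tags carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            self.rel_me_links.append(a.get("href"))

def page_verifies(profile_url: str, page_html: str) -> bool:
    """True if the fetched page links back to the profile with rel="me"."""
    finder = RelMeFinder()
    finder.feed(page_html)
    return profile_url in finder.rel_me_links

# Hypothetical example: a personal homepage linking back to a profile.
homepage = '<a rel="me" href="https://mastodon.example/@alice">fediverse</a>'
```

So "verification" here is just a domain-control proof, exactly like the Let's Encrypt analogy above.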
What other platforms are Fedecan considering adding, and what sort of timeline do you guys have for your 'next expansion'? I want to say there was a page that listed PeerTube, Friendica, Mastodon, etc. as potential 'future expansions', but I can't find it anymore.
Maybe one of those could be the subject of an experiment like this (and if the experiment were successful, Fedecan could use it as a place for the community to hold votes on the direction of Fedecan, if you ever wanted to formally democratize any particular decision).
Bots haven’t really been a huge issue yet, but it’ll be a Fediverse-wide one, so we need a solution that would scale like that.
The current standard for Fediverse content moderation seems to be for each instance to manage its own moderation policies, and to defederate from / block the few instances that are particularly repulsive to it.
Taking content moderation as precedent, the onus of mitigating bots will likewise fall on instance admins, with known bot farms simply getting defederated.
I’m also not keen on any sort of PII link to our users, even if it’s Canada Post holding that data.
A fair concern, but IMO needing something like this is inevitable. Maybe I'm just "early", but I don't think I'm wrong.
If the concern is ensuring each user can't be linked to a specific set of PII, then an anonymous credential system like U-Prove could cryptographically guarantee that each account belongs to a unique real person, without revealing which real person it is.
(Many anonymous credential protocols, including U-Prove, come with 'single-spend' mechanisms that can be used to ensure one user can't get two accounts.)
Basically, with anonymous credentials, you'd end up with two sets of data: one with whatever PII-linkable info Canada Post gave to Fedecan, and another containing the actual user accounts. But (provided users used Tor to prevent IP address correlation) it'd be cryptographically impossible to link any of the first to any of the second.
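To make the unlinkability idea concrete: a textbook RSA blind signature (a much simpler cousin of U-Prove, not what U-Prove actually uses) lets an issuer sign a token without ever seeing it, and the "single-spend" set prevents one credential from registering two accounts. A toy sketch, with deliberately tiny and insecure key sizes, purely to illustrate the flow:

```python
# Toy sketch (NOT production crypto): RSA blind signatures illustrate
# the unlinkability behind anonymous credentials. The "issuer" plays
# the Canada Post role; the "instance" checks single-spend.
import hashlib
import secrets
from math import gcd

# Tiny textbook RSA key for the issuer (insecure sizes, demo only).
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def issue_credential():
    """User picks a random serial, blinds it; issuer signs blindly."""
    serial = secrets.token_hex(16)
    t = int.from_bytes(hashlib.sha256(serial.encode()).digest(), "big") % n
    while True:
        r = secrets.randbelow(n - 2) + 2  # blinding factor
        if gcd(r, n) == 1:
            break
    blinded = (t * pow(r, e, n)) % n
    blinded_sig = pow(blinded, d, n)   # issuer never sees t or serial
    sig = (blinded_sig * pow(r, -1, n)) % n  # user unblinds
    assert pow(sig, e, n) == t         # valid, yet unlinkable to issuance
    return serial, sig

# Single-spend: the instance records serials, so one credential
# cannot be redeemed for two accounts.
seen = set()
def redeem(serial, sig):
    t = int.from_bytes(hashlib.sha256(serial.encode()).digest(), "big") % n
    if pow(sig, e, n) != t or serial in seen:
        return False
    seen.add(serial)
    return True
```

The issuer signs a value it can't read, so even with full logs it can't tell which account a given identity-check produced; the instance only ever sees the serial, never the PII.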
They would just come in via other federated instances.
True, but it would at least build a reputation of "1 lemmy.ca user = 1 real person".
If we’re not selling user eyeballs or data, do we care if a user maps to a real person?
I'd say yes, we should care.
I'm not on Lemmy to chat with bots; I want to know that when someone responds to me, they're a real person, and that if five people respond to me, they're five different real people, even if I have no way of knowing who those real people are.
I also want people who see my posts to know there's an IRL person behind them and that my account isn't just one sockpuppet of many, though I don't want them to know my IRL identity.
If I wanted to chat with bots I'd just generate an artificial group chat with a few ChatGPT or DeepSeek agents, lol.
How to keep bots out without violating user privacy - something like Canada Post Identity+ ?
The more people use Fedecan services, the more Fedecan will attract bots.
Which means Fedecan will have to do something for users to prove that they are human. When I joined, you guys had a registration prompt with manual review, but I imagine the prompts you gave could be automatically bypassed by an LLM fairly easily.
The naive solution is to collect government IDs, like Facebook tried at one point. But that'll just drive away people who don't trust Fedecan with that info.
What would be your thoughts (admin thoughts, and community thoughts) to implement some 'proof of unique personhood' process with something like Canada Post Identity+? Basically, Canada Post verifies that users are human and is responsible for taking care of PII, and Fedecan just trusts Canada Post to not let the same user register multiple times. If done well, I think 'Canada Post proves that every user account on this site is a

My suggestion for reforming the Westminster System: strengthen national unity by awarding a winner-take-all block of seats to the winner of a national at-large score voting election
TLDR: Score Voting is good.
Canadians want national unity.
The ideal of the Good Parliamentarian claims that politicians should, once elected, represent all their constituents and not just their core base, and that a governing party should, once elected, represent the nation as a whole, and not just their members.
So why is national unity a fleeting thing that emerges only in response to external threats, like American rhetoric about annexation and economic coercion, and why does it dissipate and devolve into factionalism once the threat is resolved (or when political campaigns simply drown the threat out)?
Because the Westminster System, in its present form, is institutionally biased towards division.
There are two reasons:
- Within individual constituencies, a narrow majority of voters is enough to guarantee a win, and
- In Parliament, a narrow majority of constituencies is enough to form government and pass law.
These have a common root cause:
**Acquiring a nar
Lemmy mainline could also skip rich link preview generation when the target returns a non-2xx status: retry a few times if preview generation fails, and omit the preview entirely once retries are exhausted. I think the "Attention Required! | Cloudflare" prompt is associated with a 403 Forbidden status code.
And also generate some logs of what addresses are being refused, so the Lemmy admins can reach out to the content owners and get their servers unblocked, maybe.
Lemmy CA rich link previews for some posts show the captcha challenge, not actual content
Link posts from thetyee.ca appear to be previewing the Cloudflare captcha challenge ("Attention Required! | Cloudflare") instead of actual content.
What it looks like (two examples - this seems to be a consistent problem for thetyee):


What it should look like (a different post):

My guess is that the rich link preview is generated in Lemmy's backend, and Cloudflare thinks that the IP address of lemmy.ca's host is full of bots.
No educated guess on the solution, though, but I'd guess that other Lemmy admins have s