

  • In my area (Southeast Asia) we use "scan and pay" from our banking app. Merchants have a QR code printed for us to scan via the banking app.

    They are trying to make the banking app into something like WeChat, where you can do a bunch of stuff in it.

    why can't you use a credit card with an NFC chip in it?

    For tap and pay, for some reason I don't like taking out my wallet and fiddling to pull out the plastic to make a payment.

    I use it for transit and shopping, and I always worry about misplacing or dropping the card. Paying / tapping is often a rushed interaction that happens in crowded places. Right now I bring 2 phones with me: my Android daily driver and an iPhone. I installed all banking / payment apps on the iPhone. Even if I drop or misplace the iPhone, the tap-and-pay feature is still protected by a lock screen, which a credit card is not.

    ^ Also, I bought the iPhone because I rooted my Android phone. I got tired of fighting SafetyNet every week to get the payment apps to work.

  • It can be used like a contactless card. On a Visa (payWave) / Mastercard (PayPass) terminal, we can tap our phone to make a payment.

    In certain countries, there is scan-QR-code-to-pay functionality.

    There is also a peer-to-peer money transfer feature.

  • ArrrMatey

  • how often

    Huh? Frequently...? When I need to pay for food or drinks, go shopping, transfer money to my peers, etc.

    Where I live is almost cashless already. I don't carry any cash with me.

    Luckily my banking app still works on GrapheneOS, for now.

  • he wants unconsciousness. let's drug him

  • Carrying a laptop won't help. If it did, I wouldn't mind running a Termux OS via proot either. I don't mind that it isn't optimised for small touchscreens yet.

    Right now most mobile payment options in my country only work with Android/iOS. My bank requires a phone for its device token (2FA).

    The food and cab ordering platforms are also exclusively mobile.

    Very sad.

  • What is the reason for the ban?

  • Hmm, sorry—I could not find any relevant information on this topic. Would you like me to search again or ask something else?


    I asked 3 times with Perplexica running Qwen 30B. Got the same answer 3 times lol


    Trying for the 4th time:

    • Bulk access to data: While the site uses CAPTCHAs to prevent server overload, all HTML pages, metadata, and full files are available for programmatic download via GitLab, torrents (especially aa_derived_mirror_metadata), and a torrents JSON API [1].
    • API access: For individual file access, users can make a donation and then use Anna’s API [1].
    • Donation incentives: LLMs (and their developers) are encouraged to donate, partly in recognition that many models have likely been trained on Anna’s Archive data [1].
    • Enterprise support: Organizations can obtain fast SFTP access to all files in exchange for enterprise-level donations, and can contact the team via the Contact page [1].
    • Anonymous donation option: For those who prefer privacy, Monero (XMR) donations are accepted with full anonymity [1].

    Citations: [1] https://annas-archive.gl/blog/llms-txt.html

  • There are many flavours and extensions of Markdown. For example, table rendering is not part of the standard. How newlines are handled also differs.

    To have all browsers support this, it would need to be made into a web standard, agreed on by the different browser vendors (Chrome, Safari and Firefox).

    Right now this might be best handled by a browser extension first.
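As a small illustration of the flavour differences above: pipe tables come from extensions such as GitHub Flavored Markdown and are absent from CommonMark, so the same source renders differently depending on the renderer:

```markdown
| Name | Qty |
|------|-----|
| foo  | 1   |
```

A GFM renderer turns this into an HTML table, while a strict CommonMark renderer treats it as plain paragraph text.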

  • How about giving it a try?

    If you happen to have work that is primarily text based (like programming), pick something you have expertise in, and see if an LLM can help automate it. Usually this requires using an LLM with tooling that provides agentic capability (if you are into this, check out Opencode / Kilocode / Roo code).

    Once you get the hang of it, try mixing in a skill gap where you are less knowledgeable, but have an idea of what you are trying to accomplish. This is what fascinates me.

    Don't jump straight into vibe coding, or into a domain with 0 knowledge, and expect good, repeatable results (not now, at least).

    In my case, I am a Web UI developer, and I have quite extensive knowledge there. For serious work, I can provide (or get it to reference) my requirements, so that it creates close to 85-95% of what I need. I can easily verify it and make modifications. If not, I just delete the entire work, refine my original idea and ask the LLM to try again.

    But occasionally I need to test my work on native iOS and Android devices. These are not my expertise and I have no interest in learning them yet. In the past, I would either get help from someone to build a prototype for me, or learn and read docs to create a lower-quality outcome.

    Now, I ask an LLM to create a prototype in 5 minutes, then add different scenarios and features in the next 5 minutes, allowing me to test my work and iterate much faster. If I really wanted to do that myself, it might take a day or 2, time that I could have spent improving my own work.

    These are throwaway artifacts that will only be used a few times. I don't care about maintaining or debugging them.

    Another case is report and visualization generation for data. Instead of reading several pages of documentation on how to create these visualizations and trying to architect the data flow, now I just ask the LLM: "given this data that shows the relationship between x and y, create a report with visualization focused on xyz".

    Nowadays LLMs produce work that already surpasses what a junior-to-mid developer can produce. And if I am not satisfied with the work, I just delete it all and redo it. No hard feelings, no need to convince the other person that their time has just gone to waste.

    The only worry I have now is people stopping learning because of this, and companies no longer hiring junior developers.

    Without junior developers, there will be no senior developers. Or maybe LLMs will eventually make experts obsolete? Right now I don't think that is possible yet.

  • I can get an LLM to write prototypes and demos in the background while I am working on other parts of the code at the same time.

    With the right prompt, I can generate and scaffold documentation pages that I may not otherwise have time to write.

    Things happen in the background and I get more done.

    I feel like I am faster?

  • LLMs are a subset of ML. Screen readers now also use LLMs to describe images for visually impaired users.

    Some of these are tiny LLMs that run on mobile hardware.

    There are also LLMs that specialize in translation (TranslateGemma), coding (QwenCode/Devstral), OCR (QwenVL), etc.

    I feel that people should chill out and stop this irrational hate.

  • Be the change you want to see.

    Create a new word for it. Spread the love in public.

  • I want color passthrough...

  • iSlop

  • If the black box covers the full text, then no.

    The pixel information is already gone (replaced with black) before it is passed to the compression algorithm.
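A minimal sketch of the point above, using a hypothetical 1-D "image" (a list of grayscale pixel values standing in for the redacted region). Overwriting with black destroys the information before any compression step ever sees it, unlike a semi-transparent overlay, which only scales the values:

```python
# Hypothetical pixel values carrying the sensitive text.
secret = [142, 87, 201, 33, 250]

# Redaction by overwriting with black (0): irreversible.
# Every input pixel maps to the same output, so nothing is left to recover.
redacted = [0 for _ in secret]
print(redacted)  # [0, 0, 0, 0, 0]

# Contrast: a semi-transparent overlay (e.g. 20% opacity) merely scales the
# pixels, so the originals can be approximately recovered by dividing back.
overlaid = [int(p * 0.2) for p in secret]
recovered = [round(o / 0.2) for o in overlaid]
print(recovered)  # close to the original values
```

This is why "redacted" screenshots made with a reduced-opacity brush have leaked text in the past, while a fully opaque box is safe regardless of the compression applied afterwards.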

  • You can still play through OBS's viewport lol

    In OBS you can scale the viewport to fit your screen.