• 0 Posts
  • 276 Comments
Joined 1 year ago
Cake day: September 9th, 2023


  • Dear C-Keen.

    I’m also seekin’. Seekin’ answers!

    I’m curious as to where the line is drawn in mixed cases, since I’m experimenting with both pixel art painting and txt2img/img2img models, and have had thoughts of combining the two to generate extensions of my own works.

    Relevant examples of unclear cases to clarify:

    1. Starting from an original work as the core “seed”, outpainting the world by expanding the canvas around it, continuing the work based on that input.
    2. Same as 1., then manually adding further edits after doing the txt2img outpainting.
    3. Starting from my own original, non-generated work of art, using some style transfer to generate a very similar version in a pixel art style.
    4. Starting from own text prompt to generate some pixel art, then manually editing that.
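    For what it's worth, the canvas-expansion step in case 1 can be sketched in plain Python. This is purely my own illustration (the function name and the 0/1 mask convention are not any particular tool's API): center the original seed on a larger canvas and mark which pixels the model should fill.

    ```python
    def outpaint_mask(seed_w: int, seed_h: int, scale: int = 2):
        """Keep/fill mask for centering a seed image on a scale-times-larger
        canvas: 0 marks original pixels to preserve, 1 marks pixels for the
        model to generate. Sketch only; the actual generation step still
        needs a txt2img/img2img model."""
        big_w, big_h = seed_w * scale, seed_h * scale
        off_x = (big_w - seed_w) // 2   # horizontal offset of the seed
        off_y = (big_h - seed_h) // 2   # vertical offset of the seed
        return [[0 if off_x <= x < off_x + seed_w and off_y <= y < off_y + seed_h else 1
                 for x in range(big_w)]
                for y in range(big_h)]

    # A 64x64 seed centered on a 128x128 canvas; the border ring of 1s
    # is what the outpainting pass would generate.
    mask = outpaint_mask(64, 64)
    ```

    Repeating this on the result (the grown image becomes the next seed) is the "expanding infinite worlds" idea from case 1.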

    I understand that this is about appreciating artists. Pixel art is a craft with a rich history, and it has its dogmas, which can be helpful and fun for artists to get going. Generating the works seems meaningless from that perspective, but I’d argue, nonetheless, that all of the examples above (not just any generated pixel art) are a continuation and natural development of the craft, which has already been changing through the times, from analog embroidery through the digital age of computers and software. Should we keep insisting on crafting the traditional way, or can we use modern tools? How many colors are allowed, if we want to stick to earlier pixel art traditions here?

    As I see it, all of the listed examples require a certain degree of artistic work and couldn’t have existed without it, but they use txt2img or img2img generation as tools within the artistic process, experience and output. One could argue that these pieces represent the current state of the craft, and that artists working with these tools should not be excluded from here. On the other hand, I fully respect the opinion that they should be, so that only 100% manually painted works are collected and adored within this community.

    I find it easy to understand your decision regarding 100% generated pixel art, but the next question that arises is: how will you spot the generated ones that are being posted? Except for obviously impossible details, like a gradient “pixel” that defies the technical limitations of real works. But then again, what if an artist manually draws such glitches by faking a lower resolution than the actual image file and breaking it, shouldn’t that be allowed?

    Please don’t get me wrong. I’m not here to oppose any decisions or pit one ruling against another. I just think these questions are interesting to ask. Have you thought of how to actually enforce this? If you don’t want 100% generated works here, how will you find those, and only those? Or if you want absolutely 0% generated pixels in here, how will you find the ones with any at all, like in the four examples I provided? The community hivemind? How do you avoid false accusations?

    I’m looking forward to hearing your thoughts, whatever they are. Best regards.



  • An inspiring, simple, yet powerful workflow, and I think it led to a beautiful and interesting result! Lovely pixel art input as well.

    I just started doodling with pixels myself about a week ago in the PixaPencil app, and your work here makes me want to try something along the same lines. I’ve also been thinking of outpainting from a 64x64 canvas or so, keeping the style consistent, just to see what would appear by expanding infinite worlds from my own small pixel art seeds. Have you tried something like that?





  • pirat@lemmy.world to F-Droid@lemmy.ml · Showly
    13 days ago

    The reason could be that Trakt integrates with media server apps like Jellyfin or Plex, or apps like Kodi, so you can bring your watch history with you across apps and don’t lose it if your server crashes, your library gets corrupted or something… I have never used it, but I’d imagine that’d be a reason to use it. If I knew of a libre alternative, I’d actually consider using it for Jellyfin.


  • pirat@lemmy.world to F-Droid@lemmy.ml · Launcher Kvaesitso
    13 days ago

    No idea why, but I don’t see their comments anywhere in this thread. Thanks for confirming.

    EDIT:

    I found this metadata file; is that the one?

    https://gitlab.com/fdroid/fdroiddata/-/blob/master/metadata/de.mm20.launcher2.release.yml

    From the file:

    MaintainerNotes: |-
        Kvaesitso uses several external APIs for search providers. Several of them require signing up to obtain a developer API key: gdrive search, openweathermap, HERE and Meteorologisk institutt. It’s not possible for users to provide these keys as explained here: https://github.com/MM2-0/Kvaesitso/issues/227#issuecomment-1366826219 If keys are not provided, these features are automatically disabled during the build.

    core/shared/build.gradle.kts and plugins/sdk/build.gradle.kts have configurations in them for publishing artifacts to maven repos. They are not used during the build, but detected by F-Droid scanner anyway. We patch it out from core/shared/build.gradle.kts, since this module itself is still used in compilation, and delete plugins/sdk/build.gradle.kts because it’s not used in app compilation.

    Kvaesitso depended on different libraries used for gdrive login in the past that pulled GMS dependency, however it’s not the case anymore:

    https://github.com/MM2-0/Kvaesitso/issues/583#issuecomment-1775268896 The new libraries pull OpenTelemetry though, but it’s unclear if it’s used (considering gdrive integration is disabled).

    Max heap size is reduced in gradle.properties to avoid gradle daemon being killed by OOM manager.

    Older versions of Kvaesitso had onedrive integration that depended on non-whitelisted maven repos, but it was removed.

    Upstream provides an fdroid flavor, however there’s no difference with default flavor except for different versionName.

    For some reason, F-Droid fails to pick up the correct gradle version from distributionUrl if subdir is used.

    It seems F-Droid removes gdrive and onedrive in their build, though there seems to be no mention of Wikipedia.



  • This app promotes or depends entirely on a non-free network service

    When viewing the app in F-Droid, the note below this part says that it uses a third-party service for currency exchange rates.

    I don’t know if the fact that it can show Wikipedia results, and that you can connect it to your Google account (to show cloud files from Drive and such in the search results), plays a role too, but it isn’t specifically mentioned under the anti-features… On a side note, searching your own Owncloud or Nextcloud is supported too.



  • I just tried this on an Ultra.cc seedbox with yt-dlp installed and the Fintube plugin configured with the right path for it, yet when I go to Dashboard > Fintube and click the Submit button to add a video for download, nothing happens. Can’t figure out what’s wrong.

    Maybe Jellyfin doesn’t have the necessary permissions to write the file to that folder, but I’m not quite sure how to change those on such a seedbox, if that’s the case.
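    If it helps anyone debugging the same thing, here's a quick way to test the permissions theory from a small Python script (the download path below is a placeholder, not Fintube's actual default; substitute whatever you configured in the plugin):

    ```python
    import os

    def can_write(path: str) -> bool:
        """True if the current user may create files in `path`
        (checks write and traverse permission on the directory)."""
        return os.access(path, os.W_OK | os.X_OK)

    # Placeholder path; replace with the folder configured in Fintube.
    DOWNLOAD_DIR = "/home/user/media/youtube"
    print(can_write(DOWNLOAD_DIR))
    ```

    If this prints False for the configured folder (run it as the same user the Jellyfin process runs as), pointing Fintube at a folder that user owns, or loosening the folder's permissions, would be the first thing I'd try.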

    Any experience with this to share? Would the Submit button usually lead to a different view, or does it just stay on the video submission screen while the download happens silently in the background? The lack of any reaction when I click it feels a bit awkward…




  • I thought it was supposed to be an infinite number of monkeys, since it’s known as the “infinite monkey theorem”, but apparently, according to Wikipedia,

    The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, including the complete works of William Shakespeare. […]

    […] can be generalized to state that any sequence of events that has a non-zero probability of happening will almost certainly occur an infinite number of times, given an infinite amount of time or a universe that is infinite in size.

    However, I think that as long as either the timeframe or the number of monkeys is infinite, it should lead to the same result. So why even limit one of them at this theoretical level after all?
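    That equivalence is easy to sanity-check numerically. This is a rough sketch under the simplifying assumption of independent, non-overlapping blocks of keystrokes, which gives a lower bound on the true probability (the function is mine, not from the linked study):

    ```python
    def hit_probability(target: str, keystrokes: int, alphabet: int = 26) -> float:
        """Probability that random typing produces `target` at least once,
        counting only non-overlapping blocks of len(target) keystrokes
        (a simplifying lower bound on the true probability)."""
        p_block = (1 / alphabet) ** len(target)   # one block matches exactly
        blocks = keystrokes // len(target)
        return 1 - (1 - p_block) ** blocks

    # k monkeys typing n keys each produce the same number of independent
    # blocks as one monkey typing k*n keys, so either route to "infinite"
    # drives the probability toward 1.
    one_monkey = hit_probability("ab", 2000)
    two_monkeys = 1 - (1 - hit_probability("ab", 1000)) ** 2
    ```

    Since only the total number of keystrokes matters in this model, it shouldn't matter whether you make the time or the troop of monkeys infinite.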

    The linked study even seems to limit both, so they’re not quite investigating the actual classic theorem of one monkey with infinite time.