• 1 Post
  • 21 Comments
Joined 6 months ago
Cake day: November 20th, 2024

  • wjs018@piefed.social to TechTakes@awful.systems · eating our own dogshit
    16 points · edited · 2 days ago

    I just looked at the first PR out of curiosity, and wow…

    this isn’t integrated with tests

    That’s the part that surprised me the most. It failed the existing automation. Even after being prompted to fix the failing tests, it proudly added a commit “fixing” them (they still didn’t pass… something Copilot should really be able to check). Then the dev had to step in and explain why the test was failing and how to fix the code to make it pass. With this much handholding, all of this could have been done much faster and cleaner without any AI involvement at all.



  • I do think that this looks better than last time, especially the backdrop. Also, it’s clear that you cleaned up the dust/fuzz to make the instruments pop a bit more.

    If you want a darker black background, you are going to have to either get comfortable with post-processing your raws or shoot with manual settings (probably some combination of both). You have the advantage while shooting still life that your subject is not moving, so you can take as many shots as you want to really dial in settings.

    The reason your black sheet ended up looking gray is that the auto setting usually tries to normalize the average brightness to somewhere in the middle of the sensor’s range. This works in reverse, too: if you had a bright white background, the auto settings would adjust to make it end up looking duller and grayer. If you are serious about getting black backgrounds, your best bet would be a sweep made of matte black paper. When I want a very white background, this kind of paper is the best there is.
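
    That averaging behavior can be sketched as a toy model (this is only an illustration of the idea, not any camera’s actual metering algorithm; the function name and the 18% middle-gray target are my assumptions):

```python
import math

def auto_exposure_compensation(pixels, target=0.18):
    """Toy model of average metering: suggest an exposure adjustment
    (in stops) that would bring the mean luminance to a middle-gray
    target (~18%)."""
    mean = sum(pixels) / len(pixels)  # average scene luminance, 0..1
    return math.log2(target / mean)   # positive = brighten, negative = darken

# A black sheet (mean luminance ~0.04) gets pushed ~2 stops brighter,
# which is why it comes out gray instead of black:
print(auto_exposure_compensation([0.04] * 100))  # ≈ +2.17
```

    A bright white scene gives a negative number for the same reason: the meter pulls it down toward gray.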

    Moving away from auto settings means that you are going to be playing with three settings for the most part (sometimes referred to as the “exposure triangle”). It’s really not that scary, and all the settings are basically different ways to control how much light you have in the image. I have written super basic summaries below, but I recommend finding some other reading or videos where they can show images/examples of how these things influence your photographs.

    Shutter speed/exposure - the amount of time the sensor is exposed, usually expressed as a fraction of a second. For example, a shutter speed of 1/100 means the sensor is only collecting light for 1/100th of a second. A couple of useful reference points: 1/100 is about as long an exposure as I can shoot handheld and still get reliable results without stabilizing my camera in some way (tripod or similar), and 1/200 is the fastest shutter speed I can use with a flash and still get reliable results; any faster and only part of the picture will see the subject illuminated (unless you have HSS, which kinda sucks but is a whole can of worms). In summary, a longer exposure/slower shutter speed means a brighter image because you are collecting light for a longer period of time.

    Aperture/f-stop - this tells you how wide the aperture is open to let more or less light through the lens and onto the sensor. The number is confusing until you get used to it, because it is expressed as a fraction like f/2.8 or f/4: the larger the denominator, the less light you let in. So, f/1.4 lets in more light than f/4. This lets you control how much light you collect independently of the shutter speed. If you are shooting something like a sports game with quick-moving subjects, you might want a fast shutter speed, but to collect enough light for a good image, you can open your aperture way up.

    Aperture also controls one other important part of your image: the depth of field. The more open your aperture is (the smaller the f-stop number), the narrower your depth of field becomes. That is why many photographers who shoot portraits like lenses with very wide apertures: the subject (person) stays in focus while the background melts away into a big blurry blob of bokeh, separating the subject from potentially distracting things behind it. Alternatively, shooting something like a landscape often means you want both the bushes near you and the mountains in the distance in focus at the same time, so you need to shoot with a stopped-down aperture. For the product photography I do, I tend to shoot with a very narrow aperture so that both the foreground and background parts of the product are in focus.
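
    To make the landscape case concrete, the standard hyperfocal-distance formula shows how stopping down extends what is acceptably sharp (a sketch; the 0.03 mm circle of confusion is a common full-frame assumption, and the function name is mine):

```python
def hyperfocal_mm(focal_length_mm, f_stop, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimeters.
    Focusing at H keeps everything from roughly H/2 to infinity
    acceptably sharp (c = circle of confusion)."""
    return focal_length_mm ** 2 / (f_stop * coc_mm) + focal_length_mm

# With a 50 mm lens, stopping down from f/2.8 to f/8 pulls the
# hyperfocal distance much closer:
print(hyperfocal_mm(50, 2.8) / 1000)  # ~29.8 (meters)
print(hyperfocal_mm(50, 8) / 1000)    # ~10.5 (meters)
```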

    ISO - this is basically telling the camera how much to amplify the light that hits the sensor. Turning up the ISO means you don’t need to collect as much light, since the camera amplifies what does hit the sensor, but it also means much more noise in your image. Cranking up the ISO is pretty common when shooting at night or in dark settings (concerts, etc.), and it is another dial you can use to adjust how much light you need to collect for a good image. If your aperture is all the way open and you can’t lengthen the exposure any more without a tripod, but the image is still too dark, then you turn up the ISO. This is usually the last thing I reach for when I need more light, as I really don’t like digital noise in my images. I try to keep my ISO below 800, but this can vary a lot depending on the camera/sensor.
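
    The three settings tie together through the standard exposure-value formula, which is a handy way to see that different combinations produce the same exposure (a sketch using the usual ISO-adjusted EV definition; the function name is mine):

```python
import math

def exposure_value(shutter_s, f_stop, iso=100):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100).
    Settings with equal EV produce equivalent exposures."""
    return math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)

# Halving the shutter time while doubling the ISO leaves EV unchanged:
print(exposure_value(1 / 100, 8, iso=100))  # ≈ 12.64
print(exposure_value(1 / 200, 8, iso=200))  # ≈ 12.64
```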


  • Is this with a flash or with continuous lighting? The reason I ask is that I do product photography for my wife’s business, and I found it much easier to do with flash. There are a couple of benefits I found:

    • There is just a ton more light to play with. This lets you stop down the aperture and increase the depth of field in your shot. Looking closely at your shot, the front of the spyglass and the rear of the cases are out of focus, something that you could fix by stopping down to f/8 or so (where I do most of my photography these days).
    • Related to the above, the extra light lets you shoot at a lower ISO. Your shot has lots of noise in it, which tells me the ISO is pretty high. I typically shoot at 200 or so, something that would be impossible without all the light from a flash. Also note that the noise could be due to compression when it was uploaded to lemmy, so if you don’t see it in your version, you can disregard this.

    I think you have done a great job with what I find to be the hardest part of product photography, the composition. In other words, the products are all laid out in an interesting and appealing way. I find that side of photography really tough and it is the part that is hard or impossible to teach if you don’t really have an eye for it.

    If I were to offer some constructive criticism, it’s that the backdrop is not the most flattering. The uneven texture emphasizes the noise in the image, and the seam cutting through at an angle doesn’t really add anything and can be visually distracting. Similarly, some of the billowiness of the backdrop is creating light/dark areas that don’t add to the composition.

    The types of products I shoot are not metallic, so I don’t have to worry about specular reflections, but it is something to keep in mind when photographing subjects like this. For example, the smooth metallic surfaces of these instruments really make things like dust/fuzz stand out; see the dial and the central bottom surface of the instrument on the left. This can be an aesthetic choice, though (it almost looks cobwebby), which is why I just say to keep it in mind.

    Great work! Keep it up!





  • I have a PhD in physics, where I primarily worked on fluids, and I now work in industry on fluid dynamics. Having just read the abstract, I can already tell that this paper is one of those that borders on the philosophical regarding the author’s view of their field. Nothing wrong with that, though, as we physicists tend to wax poetic from time to time.

    The question about when we can consider turbulence solved is an interesting one. I still work in the field and for most useful applications of fluid dynamics, I would consider it a solved problem. Not to say that the NS equation is solved analytically, but rather that the field has built up a toolbox of phenomenological models and CFD systems that are more than good enough for the range of scales that we typically work with. The bigger problem for CFD in this space is optimization, an issue where GPUs have proven to be invaluable. Only in the past couple years have the major CFD software packages started supporting GPU computation, speeding things up 2-10x depending on the specifics.
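
    For reference, these are the incompressible Navier–Stokes momentum and continuity equations being alluded to (u velocity, p pressure, ρ density, ν kinematic viscosity, f body forces):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```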

    I think that turbulence is an issue really at the extremes of scales at this point (very tiny, very large, small dt, hypersonic, etc.). Also, I think that it would be difficult in a system with complex forces acting on your fluid, like in a plasma where E&M forces are so significant. So, good luck all you folks working on fusion reactors!


  • I’m not a biologist, but if I understand your question correctly, you are basically looking for land-based invertebrates that also lack a hardened exoskeleton (like the one insects have). This would basically consist of small, soft animals like snails, slugs, leeches, tardigrades, and tons of different types of worms.

    The reason that you don’t see large examples of this in land-dwelling creatures is that skeletons or exoskeletons become way more necessary without a medium like the water in the ocean to help support a body. The rigid structure provides an attachment point for musculature to create the mechanical levers we use to manipulate our limbs.




  • The theory that the lead maintainer had (he is an actual software developer, I just dabble) is that it might be a type of reinforcement learning:

    • Get your LLM to create what it thinks are valid bug reports/issues
    • Monitor the outcome of those issues (closed immediately, discussion, eventual pull request)
    • Use those outcomes to assign how “good” or “bad” that generated issue was
    • Use that scoring as a way to feed back into the model to influence it to create more “good” issues

    If this is what’s happening, then it’s essentially offloading your LLM’s reinforcement learning scoring to open source maintainers.
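
    The loop above could be sketched like this (purely a speculative illustration of the theory; every name, label, and score here is made up, not any real system’s code):

```python
# Hypothetical reward assignment for LLM-generated issue reports.
# The outcome labels and score values are invented for illustration.
OUTCOME_SCORES = {
    "closed_immediately": -1.0,  # maintainers spotted it as junk
    "discussion": 0.5,           # plausible enough to get engagement
    "pull_request": 1.0,         # treated as a real, actionable bug
}

def score_issue(outcome: str) -> float:
    """Turn a maintainer's reaction into a reward signal."""
    return OUTCOME_SCORES.get(outcome, 0.0)

def collect_rewards(submitted_issues):
    """Pair each generated issue with the reward its outcome earned,
    ready to feed back into fine-tuning."""
    return [(text, score_issue(outcome)) for text, outcome in submitted_issues]

batch = [("Type error in foo()", "closed_immediately"),
         ("Race condition in bar()", "pull_request")]
print(collect_rewards(batch))
```

    If something like this were running, the maintainers closing or engaging with issues would effectively be performing the scoring step for free.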


  • Really great piece. We have seen many popular Lemmy instances struggle under recent scraping waves, and that is hardly the first time it’s happened. I have some firsthand experience with the second part of this article, which talks about AI-generated bug reports/vulnerabilities for open source projects.

    I help maintain a Python library and got a bug report a couple of weeks back describing a type-checking issue, plus a bit of additional information. It didn’t strictly follow the bug report template we use, but it was well organized enough, so I spent some time digging into it and found no way to reproduce the issue at all. Thankfully, the lead maintainer was able to spot the report for what it was and just closed it, saving me from further efforts to diagnose the issue (after an hour or two had already been burned).




  • wjs018@piefed.social to Technology@lemmy.world · *Permanently Deleted*
    24 points · 3 months ago

    You have clearly never driven on 93 through Boston, where the person you replied to said they are from (aka the Big Dig). It is basically an entire highway that runs underneath the city. There are many on- and off-ramps, lanes suddenly become exit-only, and complex multi-lane exits branch off… it’s intimidating. As somebody who has lived in the Boston area for 15 years now, I still mess things up.





  • I guess it depends on what you want to do with it. I do lots of product photography, so I know exactly what lens I need for my studio and the type of product I am shooting. So, I spent about as much on a lens as I did on a body. Getting a better sensor with more accurate colors saves me time in the post-processing step.

    When I was starting out, I just used a kit zoom lens, but realized that most of my shots were around the same focal length. So that is when I invested in a faster, nicer prime lens at that focal length.


  • wjs018@piefed.social (mod) to Anime@ani.social · [Meta] Rules Page and Update
    6 points · edited · 6 months ago

    Also, this is neither here nor there, but I have been trying out piefed lately and it’s pretty cool. Thanks to the devs over there squashing a couple of bugs I found from our use cases, the anime community should be a lot more compatible with piefed going forward. Specifically, piefed users should now enjoy:

    • Images on episode discussion threads don’t disappear when I edit in a screenshot submitted by a user (relevant issue)
    • Clips submitted to the community will no longer break the UI in Tile or Wide Tile view (relevant issue)