Photoshop :: What PC Setup For Image Processing Of...
Jun 18, 2009
I am planning to get a new PC to work on large image files (1-3 GB) in PS CS4 (I work with an 8x10 inch view camera and then scan the negatives). I was thinking about a Core i7 with a PNY Quadro FX 1800 and 6GB RAM. Is this a good choice, or rather overkill? Of course I was also thinking about a Mac Pro, but could not find real advantages over a PC.
Today I'm working on actions that embody upsampling steps, and I'm using some of my panoramic images as test input with Photoshop CS6 x64 on Windows 7. I just calculated how many pixels are in the upsampled images, to get an idea of how stressful these actions will be on people's systems when they run them...
A 25,659 x 6,069 pixel image is no small thing in itself, at 155 megapixels... But upsampling it to 320% of its original size in both the horizontal and vertical dimensions yielded 82,108 x 19,420 pixels - a 1.5 gigapixel image! And I'm working at 16 bits/channel. I was a bit surprised that I am now just routinely chunking through gigapixel-sized data. And I'm multitasking all the while - browsing, keeping up with email, listening to streaming internet radio...
The take-away from this is that with today's computer power and resources Photoshop just blazes through gigapixel-plus sized documents now! Seems to me it wasn't THAT long ago we heard about the world's first gigapixel image, and now we're actually starting to hear about the first terapixel images, which are still challenging to make.
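(Just to sanity-check those pixel counts, in Python:)

w, h = 25_659, 6_069
scale = 3.2                                      # 320% in each dimension
print(w * h / 1e6)                               # ~155.7 megapixels before upsampling
print(int(w * scale) * int(h * scale) / 1e9)     # ~1.59 gigapixels after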
Advanced batch image processing: I need the following to happen to a folder of images:
1) All images in the folder are resized, with proportions constrained, to a width of 250px.
2) They are then combined into one image file, stacked above one another in a single column (literally combined, not as layers), with about 20px of spacing between each image.
3) This is then compressed as a quality-8 JPEG and exported as a new image.
So basically I have a folder of images that I want to convert into one long 'gallery' image with spacing between each. I know this isn't explained well, but is it at all possible?
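For what it's worth, all three steps can be scripted outside Photoshop. Here is a minimal Pillow (Python) sketch of the idea, where the source folder, output name, and the mapping of Photoshop's "quality 8" onto Pillow's 0-95 JPEG scale are assumptions:

from pathlib import Path
from PIL import Image

WIDTH, SPACING = 250, 20  # from steps 1 and 2 above

# Step 1: resize every image to 250px wide, keeping proportions.
thumbs = []
for path in sorted(Path("gallery_src").glob("*.jpg")):   # hypothetical folder
    img = Image.open(path).convert("RGB")
    h = round(img.height * WIDTH / img.width)
    thumbs.append(img.resize((WIDTH, h), Image.LANCZOS))

# Step 2: stack them into one tall column with 20px between each.
total_h = sum(t.height for t in thumbs) + SPACING * (len(thumbs) - 1)
sheet = Image.new("RGB", (WIDTH, total_h), (255, 255, 255))
y = 0
for t in thumbs:
    sheet.paste(t, (0, y))
    y += t.height + SPACING

# Step 3: export as a JPEG; quality 70 here is only a rough stand-in
# for Photoshop's "8" on its 0-12 scale.
sheet.save("gallery.jpg", quality=70)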
I've been told that I've got 700 images to process and remove the greyish background from each of them so they appear to be on a completely white background.
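If the backgrounds are a fairly even light grey, one way to batch this outside Photoshop is to push every near-white pixel to pure white. A minimal Pillow sketch, where the folder names and the 235 threshold are assumptions you would tune per shoot:

from pathlib import Path
from PIL import Image

THRESHOLD = 235   # pixels brighter than this are treated as background
SRC, DST = Path("originals"), Path("whitened")
DST.mkdir(exist_ok=True)

for path in sorted(SRC.glob("*.jpg")):
    img = Image.open(path).convert("RGB")
    # Mask of near-white pixels, then force those pixels to pure white.
    mask = img.convert("L").point(lambda v: 255 if v >= THRESHOLD else 0)
    white = Image.new("RGB", img.size, (255, 255, 255))
    Image.composite(white, img, mask).save(DST / path.name, quality=95)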
There are some image-processing commands in Photoshop, like High Pass and Blur (i.e. low pass). I want more. Is there a way to achieve frequency doubling, or better yet, frequency x N where N is any number the user can specify?
This may be useful for making coarse skin texture look finer. It could be implemented by dividing the selected area into small squares and then shrinking each square to half its original size. This would make the detail look finer grained (hence frequency doubling). It would open up gaps between the squares, which could be filled in by taking additional samples and shrinking them to fill the gaps. This is similar to frequency doubling in audio processing.
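For what it's worth, here is a rough Pillow sketch of that tiling idea (tile size, file names, and the crude edge handling are all assumptions): each square is replaced by its surrounding neighbourhood shrunk by N, so the original square ends up at 1/N size in the centre and the gap around it is filled by shrunk samples of its neighbours.

from PIL import Image

def frequency_scale(img, n=2, tile=32):
    out = img.copy()
    w, h = img.size
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Neighbourhood of n*tile pixels centred on this square,
            # clamped at the image edges.
            cx, cy = x + tile // 2, y + tile // 2
            half = (n * tile) // 2
            box = (max(0, cx - half), max(0, cy - half),
                   min(w, cx + half), min(h, cy + half))
            # Shrink the neighbourhood so it fits back into the square.
            patch = img.crop(box).resize((tile, tile), Image.LANCZOS)
            out.paste(patch, (x, y))   # paste is clipped at the edges
    return out

frequency_scale(Image.open("skin.jpg"), n=2).save("skin_fine.jpg")   # hypothetical files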
I am a Retoucher and I have a new client that sends me 150 - 200 Real Estate photos per day. They shoot the photos so that I have 1 ambient exposure (Background Layer) and 1 flash exposure (Layer 1) which I then blend together with a recipe I designed for their look. I need to find a way to speed up the processing and although my action executes the look exactly how I need it in a very efficient manner the real time killer is the opening & layering steps.
I know this is a long shot, but my ideal solution would be to drag the folder of images, sorted and/or labelled in the correct order, onto a droplet which then processes the images one at a time: layering the files (ambient as "Background" and flash as "Layer 1"), running the action, pausing at the stop on lens correction, asking me where to save, closing the image and continuing to the next one.
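Short of a true droplet, the pairing and layering can be scripted. A very rough sketch on Windows, assuming the pairs are named ..._ambient.tif / ..._flash.tif and that Photoshop is driven through its COM interface with pywin32 (the method names mirror the ExtendScript DOM - Open, SelectAll, Copy, Paste, DoAction - and may need adjusting; the action and set names are placeholders):

from pathlib import Path
import win32com.client

SRC = Path(r"C:\jobs\incoming")                    # hypothetical folder
app = win32com.client.Dispatch("Photoshop.Application")

for ambient in sorted(SRC.glob("*_ambient.tif")):
    flash = ambient.with_name(ambient.name.replace("_ambient", "_flash"))
    if not flash.exists():
        continue
    # Copy the flash frame, then paste it over the ambient frame so the
    # action sees "Background" (ambient) plus "Layer 1" (flash).
    flash_doc = app.Open(str(flash))
    flash_doc.Selection.SelectAll()
    flash_doc.Selection.Copy()
    flash_doc.Close(2)                             # 2 = psDoNotSaveChanges
    ambient_doc = app.Open(str(ambient))
    ambient_doc.Paste()
    # Placeholder action/set names; the lens-correction stop and the
    # save prompt still happen inside the action itself.
    app.DoAction("Real Estate Look", "My Actions")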
New to Adobe, I have had several issues with consumer images. I cannot find out how to save as JPEG and keep photo integrity. Whichever processor I use (local pharmacy, big-name store, etc.) crops my images badly. I have been told this is my problem because I save in JPEG format. How do I protect my image? Resize the pixels?
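Labs usually crop because the frame's proportions don't match the print size (a 4:3 camera frame on a 3:2 4x6 print, for example), not because of JPEG itself. One way to protect the image is to pad it to the print's aspect ratio before sending it off. A minimal Pillow sketch of that idea (the white border and the 3:2 ratio are assumptions):

from PIL import Image

def pad_to_ratio(img, ratio=1.5):              # 1.5 = 3:2, i.e. a 4x6 print
    w, h = img.size
    target = ratio if w >= h else 1 / ratio
    if w / h < target:
        w_new, h_new = round(h * target), h    # canvas needs to be wider
    else:
        w_new, h_new = w, round(w / target)    # canvas needs to be taller
    canvas = Image.new("RGB", (w_new, h_new), (255, 255, 255))
    canvas.paste(img, ((w_new - w) // 2, (h_new - h) // 2))
    return canvas

pad_to_ratio(Image.open("photo.jpg")).save("photo_4x6.jpg", quality=95)   # hypothetical files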
When one decides to edit a photograph, what do people use as a rule of thumb to know what to do first? Assuming one will change contrast, do something to the color, enlarge the photo, use onOne software to create a frame, delete a wire, etc.
I am having quite a difficult time with AutoCAD 2012. I have worked on a huge file containing an exploded 3D drawing. I am working on a catalogue and have to import and place views of the exploded 3D in boxes, xref BMP files, and so on. After a number of computer crashes this morning, I have found that I cannot open the file even though it is displayed in the preview. At work, I can see part of the file, then an "image process" something appears at the bottom left of the screen, and I can do nothing more than close AutoCAD or wait for minutes without it ever finishing.
I use GIMP for image processing in my job as a Forensic Questioned Document Examiner. One of the requirements when we process an image is to be able to report what modifications have been made to it, so that the result is reproducible.
So I was wondering: is there a feature or plugin that records all the modifications, filters, etc. applied to an image (taking undos and so on into account, of course), and perhaps saves them into the resulting file (GIMP format at least) for later reference and inspection?
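One workaround, if the edits are applied through GIMP's Python-Fu scripting rather than the UI, is to route every call through a small wrapper that appends the procedure name and arguments to a sidecar log. A minimal sketch of the idea (the log path and the wrapper are hypothetical; plug_in_gauss is a real PDB procedure):

import json, time
from gimpfu import pdb, gimp

LOG = "/cases/2014-031/edits.log"            # hypothetical case folder

def log_call(proc_name, *args):
    # Record the call, then execute the corresponding PDB procedure.
    with open(LOG, "a") as f:
        f.write(json.dumps({"time": time.ctime(),
                            "procedure": proc_name,
                            "args": [str(a) for a in args]}) + "\n")
    return getattr(pdb, proc_name)(*args)

image = gimp.image_list()[0]
drawable = image.active_drawable
log_call("plug_in_gauss", image, drawable, 4, 4, 0)   # 4px Gaussian blur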
I have three (and a half-follow-on) questions on DNG nuances/technical particulars (that are prompted by remarks appearing on the last-updated-about-two-years-ago webpage URL...
First, what (If any) image processing capabilities (such as any further highlight recovery/adjustment, or individual color channel enhancement/ saturation improvement, etc.) are unavailable in Lightroom (LR) or Adobe Camera Raw (ACR) if one opens-with/inputs-or-imports-to LR (or ACR), a Linear DNG (linDNG) image file (which by its ‘linearized’ nature has already had certain image attributes ‘baked’ therein), and what image processing capabilities are fully available in LR/ACR for such a linDNG image in an unrestricted/fully-further-adjustable manner?
Second, is the output file produced by Adobe's DNG Converter when one inputs to it (say) a Nikon NEF raw-image file a (what I refer to as) Raw DNG (rawDNG, akin to those that some camera manufacturers' cameras now store directly on memory cards at image capture), or a linDNG (akin to those produced by various raw converters such as DxO Optics Pro and Capture One Pro)? If the former, is it possible to apply Adobe's DNG Converter to a linDNG input file (and thereby, effectively, 'unbake' whatever settings had earlier been 'baked' into that linDNG, so that the resulting rawDNG would once more be ENTIRELY/UNRESTRICTEDLY adjustable in LR/ACR)?
Third, if one uses either LR or ACR to open/access/edit a linDNG image file, performs some non-destructive image adjustments on that image with that software platform, and saves the results as a new/differently-named DNG file, will that resulting new DNG file be yet another linDNG-format file, or a rawDNG-format file?
I'm experimenting with a digital IR camera. The images are very "magenta" in color and everyone states to simply "swap the Red and Blue" color channels. How can this be done in Paint.net? Is there a plugin that would process IR color pictures?
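The swap itself is simple to do outside Paint.net if a plugin doesn't turn up. A minimal Pillow (Python) sketch, with placeholder file names:

from PIL import Image

img = Image.open("IMG_0123.jpg").convert("RGB")   # hypothetical IR capture
r, g, b = img.split()
Image.merge("RGB", (b, g, r)).save("IMG_0123_swapped.jpg", quality=95)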
I opened a 1024x1024 image in GIMP, and manually broke it up into fully black and fully white sections. I'd like to output a file which is just a simple binary array of 8 bit numbers, such that every black pixel in the image has the value 0, and every white pixel has the value 1. This way, I can open it in my C program and then load the array into memory using the fread function.
I haven't been able to work out how to do this. I need to avoid using any files with metadata in them... and unfortunately even the PPM format has a handful of characters at the beginning. Is there a way to do this?
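One way around the header problem is to skip image formats entirely and write the raw bytes yourself. A minimal Pillow sketch (file names and the 128 threshold are assumptions); the output is exactly 1024*1024 bytes, row-major from the top-left, with no header at all:

from PIL import Image

img = Image.open("mask.png").convert("L")          # the 1024x1024 black/white image
# One byte per pixel: 0 for black, 1 for white.
data = bytes(1 if px >= 128 else 0 for px in img.getdata())
assert len(data) == 1024 * 1024

with open("mask.bin", "wb") as f:                  # in C: fread(buf, 1, 1024*1024, fp)
    f.write(data)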
I need to know how to do the above procedure in LR5, as when I processed one image I saw no tab to click to save the image, move the image, or click Done.
I would like to repeat the following procedure on a large number of images, but I am unable to find a way of batch processing this. The idea is to end up with a small plain border around the original image, to prevent any of the image being lost to cropping when I order prints.
1) Open the image.
2) Copy the image.
3) Create a new image 0.5" larger than the original.
4) Paste the copied image into the larger new image.
5) Resize the new image to a given size, for example 10x8 for printing.
6) Save the image under a new name or in a different folder from the original.
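If the batch route keeps failing, the same six steps are easy to script. A minimal Pillow sketch, assuming 300 dpi (so the extra 0.5" is 150px, split between opposite edges), white borders, a landscape 10x8 output, and hypothetical folder names:

from pathlib import Path
from PIL import Image

SRC, DST = Path("originals"), Path("bordered")
DST.mkdir(exist_ok=True)
DPI = 300
BORDER = int(0.5 * DPI)              # 0.5" of total extra canvas
PRINT_SIZE = (10 * DPI, 8 * DPI)     # 10x8 at 300 dpi

for path in sorted(SRC.glob("*.jpg")):
    img = Image.open(path).convert("RGB")
    canvas = Image.new("RGB", (img.width + BORDER, img.height + BORDER), (255, 255, 255))
    canvas.paste(img, (BORDER // 2, BORDER // 2))          # steps 1-4
    canvas = canvas.resize(PRINT_SIZE, Image.LANCZOS)      # step 5 (assumes a 10x8-ish original)
    canvas.save(DST / path.name, quality=95)               # step 6: new folder, same name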
In our drawing office we use Inventor in a mining materials handling environment. Our output is mainly structural drawings where we require a bill of materials as part of creating a drawing. Whenever I insert a BOM in the drawing environment I get this table which is labeled in a foreign language (it looks like an Eastern European language) and the format is not as per the standard Autodesk installation setup.
How do I either revert back to the standard setup or do a BOM setup from scratch?
I want to set up the viewport background, so I modified the JPG and scaled it to 10cm x 10cm. I also set the system unit to 1 unit = 1cm and the display unit scale to cm. When imported, the JPG does not match the grid, and its size is enlarged to 400cm x 400cm.
We are possibly going to start shooting volume HDR images. Time will be a factor when shooting 4 locations a day and still having to process. Is there a way to stack the images at the end of the job, like an action, so that when I get back to the studio I can tweak the images?
Computers today include more and more cores. I am looking to upgrade my machine and have looked at an AMD quad-core processor and their new FX 8-core processor. The computers I have looked at are the same except for the processor (quad vs 8 core). From my understanding, more cores doesn't mean more speed unless the software can actually use the power. So can Photoshop (CS6) actually use and benefit from having 8 cores rather than 4? (For that matter, can other Creative Suite products also use that much power?)
Is there a way to link together several computers to process large documents more quickly? I have seen programs like Qmaster and Compressor for Final Cut users; is there anything like that for processing in Photoshop?
I need CS3 to make 2 different sizes of JPEGs from hundreds of TIFFs for me, but when I set it up it only ever does this to the first file in the folder.
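In case a workaround helps while the batch setup gets sorted out, the same job is easy to script with Pillow. A minimal sketch, where the folder name and the two output sizes are assumptions:

from pathlib import Path
from PIL import Image

SRC = Path("tiffs")                                   # hypothetical folder of TIFFs
SIZES = {"large": 2048, "small": 800}                 # the two output sizes (long edge, px)

for path in sorted(SRC.glob("*.tif")):
    img = Image.open(path).convert("RGB")
    for label, long_edge in SIZES.items():
        out_dir = SRC / label
        out_dir.mkdir(exist_ok=True)
        copy = img.copy()
        copy.thumbnail((long_edge, long_edge), Image.LANCZOS)   # keeps proportions
        copy.save(out_dir / (path.stem + ".jpg"), quality=85)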
I have some images where I want to increase the border by about 2-3 pixels and then add a shadow effect. I need to do this for around 150 images. I've tried to do it via New Action/Record and then batch processing, but it's not doing the job.
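If the action route keeps failing, the border and a simple drop shadow can also be batched outside Photoshop. A rough Pillow sketch (folder names, shadow offset/softness, and the white page colour are assumptions):

from pathlib import Path
from PIL import Image, ImageFilter, ImageOps

SRC, DST = Path("in"), Path("out")
DST.mkdir(exist_ok=True)

for path in sorted(SRC.glob("*.png")):
    img = ImageOps.expand(Image.open(path).convert("RGB"), border=3, fill="white")  # the 2-3px border

    # Drop shadow: a blurred grey rectangle offset behind the image.
    offset, blur = 8, 6
    canvas = Image.new("RGB", (img.width + offset + blur, img.height + offset + blur), "white")
    canvas.paste(Image.new("RGB", img.size, (120, 120, 120)), (offset, offset))
    canvas = canvas.filter(ImageFilter.GaussianBlur(blur))
    canvas.paste(img, (0, 0))
    canvas.save(DST / path.name)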
I'm going on a scrapbooking retreat this weekend and need to develop 200+ photos and upload them to CVS.com. I have all my pictures in .jpg format now, but I need to get the overall size of the files under 6MB to work with CVS's website.
I know that CS2 has a batch process to take files to a certain pixel dimension, but I wanted to know if anyone knew a way to do it to fit an overall file size rather than dimensions.
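In case it helps, the size check itself can be automated: re-save each JPEG at progressively lower quality until it drops under the cap. A minimal Pillow sketch (folder names are placeholders, and the quality steps are arbitrary):

from pathlib import Path
from PIL import Image

LIMIT = 6 * 1024 * 1024                       # the 6MB cap
SRC, DST = Path("photos"), Path("for_cvs")
DST.mkdir(exist_ok=True)

for path in sorted(SRC.glob("*.jpg")):
    out = DST / path.name
    if path.stat().st_size <= LIMIT:
        out.write_bytes(path.read_bytes())    # already small enough; copy as-is
        continue
    img = Image.open(path)
    for quality in range(95, 50, -5):         # step the quality down until it fits
        img.save(out, quality=quality)
        if out.stat().st_size <= LIMIT:
            break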
I have Adobe Photoshop 7 and am trying to process my photos from a Canon Powershot A80. This camera does not seem to handle blue skies very well. Every picture I take up close, particularly using artificial light, seems to come out fine. Pictures taken in the daylight under a hazy or semi-hazy blue sky look bad throughout, as you will see in my linked photo.
Many of my photos turn out the same way as my linked photo does. I've tried various tutorials and nothing I adjust makes these photos look "real" again, at least not real in the sense of being clear and crisp.