The System Administrators have increased my file limit, but it has not corrected the issue.
Additionally, I don't have an issue with creating new files with vi. Could somebody explain exactly why this is happening? I have not made any recent changes to my git config, and I have verified that manually.

It looks like you're getting EMFILE, which means that the limit on the number of open files for an individual process is being exceeded. So checking whether vi can open files is irrelevant: vi uses its own, separate file table.
Check your limits; one way to do so is sketched below. On my system there is a limit on the number of open files in a single process. You shouldn't need to ask your system administrator (please don't use the acronym SA, it's too opaque; if you must abbreviate, use "sysadmin") to raise the limit. This could be a bug in Git or in a library, or you could be using an old version of something, or it could be something more bizarre.
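A minimal sketch of checking the limit and of tracing which files Git opens; the original answer's exact commands are not preserved here, so these are common stand-ins:

    # soft limit on open file descriptors for the current shell
    ulimit -n

    # limits of an already-running process (Linux)
    cat /proc/$$/limits

    # trace file opens/closes; substitute the git command that actually fails
    strace -f -e trace=open,openat,close git fetch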
Try strace first to see which files Git opens, and check whether it closes those files.

After following the above recommendations, it turned out the error was caused by too many loose objects. There were too many loose objects because git gc wasn't being run often enough. From the git documentation on gc.auto: when there are approximately more than this many loose objects in the repository, git gc --auto will pack them.
Some Porcelain commands use this command to perform a light-weight garbage collection from time to time. The default value is 6700. Here "Some Porcelain commands" includes git push, git fetch, etc. To change the threshold, set git config --global gc.auto to a suitable value; if you pick too small a value, git gc will run too frequently, so choose wisely. Setting gc.auto to 0 disables automatic packing entirely. These settings are sketched below.
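A minimal sketch of the configuration described above (the threshold of 200 is an arbitrary illustration, not a value from the original answer):

    # pack loose objects sooner than the default of 6700 (illustrative value)
    git config --global gc.auto 200

    # or disable automatic garbage collection entirely
    git config --global gc.auto 0

    # run a garbage collection by hand right now
    git gc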
Merged by Junio C Hamano -- gitster -- in commit cdfe, 24 Apr. The checksum verification is for detecting disk corruption, and for small projects the time it takes to compute SHA-1 is not that significant, but for gigantic repositories this calculation adds significant time to every command.

Git 2.x: see commit edf3b90, 08 May, by David Turner (dturner-tw); merged by Junio C Hamano -- gitster -- in commit faf, 30 May. When "git checkout", "git merge", etc. manipulate the in-core index, the untracked cache extension is now copied across these operations, which speeds up "git status" as long as the cache is properly invalidated.

Merged by Junio C Hamano -- gitster -- in commit faf2, 27 Aug. We used to spend more cycles than necessary allocating and freeing pieces of memory while writing each index entry out.
This has been optimized.

Update Dec.: see commit ee5e, 04 Dec, by Derrick Stolee (derrickstolee); merged by Junio C Hamano -- gitster -- in commit 97e1f85, 13 Dec. Since we already check the length and hex values of the string before consuming the path, we can prevent extra computation by using the lower-level method.
OID (object identifier) abbreviations use a cached list of loose objects per object subdirectory to make repeated queries fast, but there is significant cache load time when there are many loose objects.
Add a new performance test to pline-log. By limiting to a fixed number of commits, we more closely resemble user wait time when reading history into a pager.
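Since the optimizations above matter most when a repository has accumulated many loose objects, it can be useful to check how many there actually are; the commands below are standard git:

    # counts of loose objects, packs, and garbage
    git count-objects -v

    # the same, with human-readable sizes
    git count-objects -v -H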
Update March: Git 2.x. Update: Git 2.x. See commit 77ff, commit abb4bb8, commit cb9c, commit 3b1d9e0, commit ed0d (10 Oct) by Ben Peart (benpeart); merged by Junio C Hamano -- gitster -- in commit e27bfaa, 19 Oct. Note that on the 1,000,000 files case, multi-threading the cache entry parsing does not yield a performance win; this is because the cost to parse the index extensions in this repo far outweighs the cost of loading the cache entries. Add support for a new index.threads config setting to control that threading.

Merged by Junio C Hamano -- gitster -- in commit eba, 18 Jan. The loose objects cache is filled one subdirectory at a time as needed.
So when querying a wide range of objects, the partially filled array needs to be re-sorted repeatedly (up to once per subdirectory), which takes far longer than sorting once. This ensures that entries only have to be sorted a single time.
It also avoids eight binary search steps for each cache lookup as a small bonus.

With Git 2.x: see commit 20a5fd8 (18 Feb) by Junio C Hamano (gitster); see commit 3ab, commit da, commit 4f3bd56, commit cc4aa28, commit 2aaeb9a, commit ae0, commit 4ebe, commit eaa8, commit d9c9, commit 55cb10f, commit f, commit d90fe06 (14 Feb), and commit e03f, commit acac50d, commit cf8b (13 Feb) by Jeff King (peff).
Merged by Junio C Hamano -- gitster -- in commit 0df82d9, 02 Mar. Since we know the types of all of the objects, we just need to clear the result bits of any blobs. Perf results for the new test on git are included in that series.

See commit d0a, commit c79eddf, commit b25, commit ed4b, commit feec, commit eccce52, commit bee4 (30 Mar) by Jeff King (peff).
Merged by Junio C Hamano -- gitster -- in commit af86, 22 Apr. But we should avoid hitting this case at all, and instead limit ourselves based on what malloc is willing to give us.
No test for obvious reasons. Note that this object was defined in sha1-array.c.

See commit a4b6d20, commit 4bdde33, commit 22ad, commit d15d (07 Jan), and commit 0e5c, commit 4c3e, commit fa7ca5d, commit c, commit da8be8c (04 Jan) by Derrick Stolee (derrickstolee).
Merged by Junio C Hamano -- gitster -- in commit a0a2d75, 05 Feb. The conditional checks for the existence of a directory separator at the correct location, but only after doing a string comparison.
Swap the order to be logically equivalent but perform fewer string comparisons. To test the effect on performance, I used a repository with over three million paths in the index. I then ran the following command on repeat:
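The exact command from the original write-up is not preserved here; purely as an illustration (assuming a status-style benchmark), a timing loop such as the following could be used:

    # hypothetical benchmark loop; the original command may well have been different
    for i in 1 2 3 4 5; do
        time git status >/dev/null
    done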
Karsten Blees has done so for msysgit, which dramatically improves performance on Windows. In my experiments, his change has taken the time for "git status" from 25 seconds to a matter of seconds on my Win7 machine running in a VM.
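That caching work later became configurable features in mainline Git and in Git for Windows; a sketch of enabling them follows (whether these options correspond exactly to the change described above is an assumption, and core.fscache exists only in Git for Windows):

    # enable the untracked cache for the current repository
    git config core.untrackedCache true
    git update-index --untracked-cache

    # Git for Windows only: enable the filesystem cache
    git config core.fscache true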
In general my Mac is OK with git, but if there are a lot of loose objects then it gets very much slower. It seems HFS is not so good with lots of files in a single directory. Running a garbage collection and a full repack will make a single pack file and remove any loose objects left over (a sketch appears below). It can take some time to run these.

For what it's worth, I recently found a large discrepancy in git status time between my master and dev branches. To cut a long story short, I tracked down the problem to a single very large file in the project root directory.
It was an accidental check-in of a database dump, so it was fine to delete it. I have a large number of objects in the store, but it appears that large files are more of a menace than many small files.

You could try passing the --aggressive switch to git gc and see if that helps; you might also try git repack (both are sketched below). Perhaps disable Spotlight for your code directory.
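A hedged sketch of those commands; the flags shown are standard git, but exactly which invocation the original answers had in mind is an assumption:

    # aggressive garbage collection: slower, but repacks more thoroughly
    git gc --aggressive

    # repack everything into a single pack and drop redundant loose objects and packs
    git repack -a -d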
Check Activity Monitor and see what processes are running. I'd create a partition using a different file system.

I believe we're hitting this in Cargo as well. We're using an index on GitHub which has lots of little files and lots of updates over time. Sometimes when we do a git update of the repository, it causes a huge number of file descriptors to be opened, often reaching close to the system limit or blowing past it.
In tracking this down, I found that this program is all that's needed to trigger the behavior, namely starting a revwalk that looks at the repository. I believe this is done internally during the fetch operation for smart transports. Notably, though, I ran git gc in the repository and the problem went away entirely. Both print statements in that program print 4, so no extra files are being created.
Out of curiosity, is there something we should be doing in Cargo to mitigate this issue? I'm gonna go run git gc manually in a few repos for now, but it'd be great if we could have Cargo do this for you!
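As a stop-gap, running git gc by hand over Cargo's registry index clones works; a sketch, assuming the default registry location (the hashed directory name varies per machine and registry):

    # loop over every registry index clone and garbage-collect it
    for repo in ~/.cargo/registry/index/*/; do
        git -C "$repo" gc
    done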
We have the same issue here and it's causing a lot of problems. However, we are unable to depend on git being available locally (hence using libgit2) to perform a gc. Are there other suggested workarounds?

I just hit this as well, I believe, and I have the same issue as tommoor; I can't necessarily rely on git being available locally to run git repack. And unfortunately I can't alter my ulimit.
Are there other workarounds available?

We had to abandon libgit2, unfortunately.

Thanks for the update. go-git does have a RepackObjects method, but I'm not quite ready to rewrite everything in go-git, and it seems silly to use two git libraries.

If you can do it from a licensing POV, I'd recommend bundling the vanilla git binary with your application; it's much faster than both libgit2 and go-git, and worth the extra weight if you ask me.