Compare commits

..

66 Commits

Author SHA1 Message Date
Giteabot
efe7561787 Match api migration behavior to web behavior (#23552) (#23572)
Backport #23552 by @atomaka

When attempting to migrate a repository via the API endpoint, comments
are always included. This can create a problem if your source repository
has issues or pull requests that you do not want to import into Gitea,
and it surfaces as an error like:

> Error 500: We were unable to perform the request due to server-side
problems. 'comment references non existent IssueIndex 4

There are only two ways to resolve this:
1. Migrate using the web interface
2. Migrate using the API, including issues or pull requests.

This PR matches the behavior of the API migration router to the web
migration router.
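
For illustration, a minimal Go sketch of the rule being matched (the type and helper names are hypothetical, not Gitea's actual router code): comments are only kept when issues or pull requests are also migrated, since otherwise they have nothing to reference.

```go
package sketch

// MigrateOptions and normalizeMigrateOptions are hypothetical stand-ins,
// not Gitea's real router types. The rule being matched: without issues or
// pull requests there is nothing for comments to reference, so comments
// are dropped to avoid the 500 above.
type MigrateOptions struct {
	Issues, PullRequests, Comments bool
}

func normalizeMigrateOptions(opts *MigrateOptions) {
	if !opts.Issues && !opts.PullRequests {
		opts.Comments = false
	}
}
```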

Co-authored-by: Andrew Tomaka <atomaka@atomaka.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-19 15:29:57 +08:00
Zettat123
e01e78a947 Handle missing README in create repos API (#23387) (#23509)
Backport #23387 
Close #22934

In the `/user/repos` API (and other APIs related to creating repos), the
user can specify a readme template for auto init. At present, if the
specified template does not exist, a `500` is returned. This PR improves
the logic to return a `400` instead of a `500`.
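
A minimal sketch of the improved behaviour, assuming a hypothetical helper and template list (not the actual Gitea handler):

```go
package sketch

import (
	"fmt"
	"net/http"
)

// validateReadme is a hypothetical helper: an unknown README template is a
// client error (400), not a server failure (500).
func validateReadme(name string, available []string) (int, error) {
	for _, t := range available {
		if t == name {
			return http.StatusOK, nil
		}
	}
	return http.StatusBadRequest, fmt.Errorf("readme template %q does not exist", name)
}
```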
2023-03-16 21:05:41 -04:00
Giteabot
04d489dbdd Make branches list page operations remember current page (#23420) (#23459)
Backport #23420 by @wxiaoguang

Close #23411

Always pass "page" query parameter to backend, and make backend respect
it.

The `ctx.FormInt("limit")` is never used, so removed.
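
For illustration, a minimal sketch of the backend side, with stand-in names rather than Gitea's real context API:

```go
package sketch

import (
	"net/url"
	"strconv"
)

// currentPage is a stand-in for the handler logic: read "page" from the
// query string, default to 1, and hand it to the paginator so operations
// on the branch list keep the user on their current page.
func currentPage(query url.Values) int {
	page, err := strconv.Atoi(query.Get("page"))
	if err != nil || page < 1 {
		return 1
	}
	return page
}
```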

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
2023-03-15 13:13:27 +01:00
Yarden Shoham
491ee43082 Fix due date being wrong on issue list (#23475) (#23479)
Backport #23475

Exactly like #22302 but in the issue list page

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-15 09:41:09 +08:00
zeripath
9107a87ff6 Redirect to the commit page after applying patch (#23056) & Fix commit name in Apply Patch page (#23086) (#23131)
Backport #23056
Backport #23086

Fixes https://github.com/go-gitea/gitea/issues/22621

Fixes
https://github.com/go-gitea/gitea/issues/22621#issuecomment-1439309200

Co-authored-by: yp05327 <576951401@qq.com>
2023-03-15 08:06:02 +08:00
sillyguodong
3dc2724d36 Fix cannot reopen after pushing commits to a closed PR (#23189) (#23322)
Backport: #23189
Close: #22784

1. On GitHub, we can reopen a PR that was previously closed after pushing
commits, and after reopening the PR the commits pushed after it was
closed appear in the timeline. So the case in the
[issue](https://github.com/go-gitea/gitea/issues/22784) is a bug that
needs to be fixed.

2. After closing a PR and pushing commits, `headBranchSha` is not equal
to `sha` (which is the last commit ID string of the reference). With that
check in place, the reopen button does not display, so the check is
skipped when the PR is closed.


![image](https://user-images.githubusercontent.com/33891828/222037529-651fccf9-0bba-433e-b2f0-79c17e0cc812.png)

3. Even if the PR is already closed, we should still insert a comment
record into the DB when we push commits, so the function
`CreatePushPullComment()` should still be called.


067b0c2664/services/pull/pull.go (L260-L282)
To support this, I add a switch (`includeClosed`) to the
`GetUnmergedPullRequestsByHeadInfo` func to control whether the PR status
must be open. By setting `includeClosed` to `true`, we can also query
closed PRs.


![image](https://user-images.githubusercontent.com/33891828/222621045-bb80987c-10c5-4eac-aa0c-1fb9c6aefb51.png)

4. In the comment loop, I use the `latestCloseCommentID` variable to
record the last occurrence of the close comment.
In the Go template, if the PR is closed, comments whose type is
`CommentTypePullRequestPush(29)` and that come after `latestCloseCommentID`
are not rendered.


![image](https://user-images.githubusercontent.com/33891828/222058913-c91cf3e3-819b-40c5-8015-654b31eeccff.png)
e.g.
1). The initial status of the PR is open.


![image](https://user-images.githubusercontent.com/33891828/222453617-33c5093e-f712-4cd6-8489-9f87e2075869.png)
2). Then I click the `Close` button. The PR is closed now.


![image](https://user-images.githubusercontent.com/33891828/222453694-25c588a9-c121-4897-9ae5-0b13cf33d20b.png)
3). I try to push a commit to this PR, even though its current status is
closed.


![image](https://user-images.githubusercontent.com/33891828/222453916-361678fb-7321-410d-9e37-5a26e8095638.png)
But in the comment list, this commit does not display. This is as expected :)


![image](https://user-images.githubusercontent.com/33891828/222454169-7617a791-78d2-404e-be5e-77d555f93313.png)
4). After clicking the `Reopen` button, the commit pushed after closing
the PR is displayed now.


![image](https://user-images.githubusercontent.com/33891828/222454533-897893b6-b96e-4701-b5cb-b1800f382b8f.png)
2023-03-06 11:38:45 -06:00
Giteabot
93fe0202cb Change interactiveBorder to fix popup preview (#23169) (#23313)
Backport #23169

Close #23073.
Used the solution referenced in this reply:
https://github.com/go-gitea/gitea/issues/23073#issuecomment-1440124609
The change is made inside `contextpopup.js` because this is where
the popup component is created and the tippy configuration is given.

Co-authored-by: Hester Gong <hestergong@gmail.com>
2023-03-06 10:32:47 -06:00
Giteabot
13f304d89e Properly flush unique queues on startup (#23154) (#23200)
Backport #23154

There have been a number of reports of PRs being blocked whilst being
checked which have been difficult to debug. In investigating #23050 I
realised that, whilst the Warn there is somewhat of a miscall, there
was a real bug in the way that the LevelUniqueQueue was being restored
on start-up of the PersistableChannelUniqueQueue.

Next, there is a conflict in the setting of the internal leveldb queue
name: this wasn't being set, so it was being overridden by other unique
queues.

This PR fixes these bugs and adds a testcase.

Thanks to @brechtvl  for noticing the second issue.

Fix #23050
and others

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: delvh <leon@kske.dev>
2023-03-06 22:34:54 +08:00
Adi
805c5926ff Add CLI option tenant ID for oauth2 source (#22769) (#23263)
Backport #22769

Fixes #22713

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-06 18:20:07 +08:00
Giteabot
b6eea680ce Fill head commit to in payload when notifying push commits for mirroring (#23215) (#23291)
Backport #23215

Just like what has been done when pushing manually:

7a5af25592/services/repository/push.go (L225-L226)

Before:

<img width="448" alt="image"
src="https://user-images.githubusercontent.com/9418365/222100123-cd4839d1-2d4d-45f7-b7a0-0cbc73162b44.png">

After:

<img width="448" alt="image"
src="https://user-images.githubusercontent.com/9418365/222100225-f3c5bb65-7ab9-41e2-8e39-9d84c23c352d.png">

Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-05 11:06:44 +01:00
zeripath
9c33aff689 Do not create commit graph for temporary repos (#23219) (#23302)
Backport #23219

When fetching remotes for conflict checking, skip unnecessary and
potentially slow writing of commit graphs.

In a test with the Blender repository, this reduces conflict checking
time for one pull request from about 2s to 0.1s.

---------

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
2023-03-05 01:20:02 +01:00
anthony-zh
a3e185bc5c Fix broken links in the Chinese docs (#23289)
When using docusaurus to build the Chinese documents, I found that
several Chinese pages had broken links on docusaurus, so I fixed the
links on some pages and removed some useless tags.

---------

Co-authored-by: yeyuanjie <yecao100@126.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-03-04 22:32:12 +08:00
Giteabot
2f1d968b27 Fix GetFilesChangedBetween if the file name may be escaped (#23272) (#23278)
Backport #23272

The code for GetFilesChangedBetween uses `git diff --name-only
base..head` to get the names of files changed between base and head;
however, this forgets that git will escape certain file names.

This PR simply switches to `-z`, which uses the `NUL` character as the
separator.
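
For illustration, a minimal Go sketch of how NUL-separated output can be split (a sketch only; the real parsing lives in Gitea's git module):

```go
package sketch

import "strings"

// splitNameOnly shows why -z helps: with NUL separators the file names come
// through raw and unquoted, so they can be split safely even when they
// contain characters git would otherwise escape.
func splitNameOnly(out string) []string {
	out = strings.TrimSuffix(out, "\x00")
	if out == "" {
		return nil
	}
	return strings.Split(out, "\x00")
}
```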

Ref https://github.com/go-gitea/gitea/pull/22568#discussion_r1123138096

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2023-03-04 02:11:50 -05:00
Giteabot
0c212b3f08 Use the correct selector to hide the checkmark of selected labels on clear (#23224) (#23226)
Backport #23224

Regression of #10107
(https://github.com/go-gitea/gitea/pull/10107/files#diff-a15e36f2f9c13339f7fdd38bc2887db2ff2945cb8434464318ab9105fcc846bdR460)

Fix #22222


Before: the "clear" action couldn't remove these check marks.


![image](https://user-images.githubusercontent.com/2114189/222212998-c9f33459-b71d-4e80-8588-2935f3b7050c.png)


After: the "clear" action can remove these  check marks.


![image](https://user-images.githubusercontent.com/2114189/222213048-2be98ed0-cac0-4e27-b72c-1dd0ac2637d5.png)

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2023-03-01 15:59:54 -05:00
Yarden Shoham
ceedb4973e Change button text for commenting and closing an issue at the same time (#23135) (#23181)
Backport #23135

Close  #10468

Without SimpleMDE/EasyMDE (using the simple textarea), the button text
could be changed when the content changed.

After introducing SimpleMDE/EasyMDE, there is no code for updating the
button text.



![image](https://user-images.githubusercontent.com/2114189/221334034-8d556cd5-1136-4ba0-8faa-a65ffadd7fb7.png)

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2023-02-28 17:54:38 -05:00
Giteabot
f5f4a8d02a Pass --global when calling git config --get, for consistency with git config --set (#23157) (#23198)
Backport #23157

This arose out of #22451; it seems we were checking the non-global
settings to see if a config value is set, in order to decide whether to
call another (indeed global) configuration command. This PR changes it
so that both the check and the set operate on the global configuration.
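
A minimal sketch of the idea, shelling out to git directly rather than using Gitea's git module (the helper name is made up):

```go
package sketch

import (
	"os/exec"
	"strings"
)

// globalGitConfigGet reads a key with --global, the same scope the later
// "set" uses; without --global the check may see a repository or system
// value instead.
func globalGitConfigGet(key string) (string, error) {
	out, err := exec.Command("git", "config", "--global", "--get", key).Output()
	return strings.TrimSpace(string(out)), err
}
```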

Co-authored-by: Philip Peterson <philip-peterson@users.noreply.github.com>
2023-02-28 17:53:27 -05:00
Giteabot
543322f81f Make gitea serv respect git binary home (#23138) (#23196)
Backport #23138

Close #23137

The old code is very old (written 8-9 years ago).

Let's try to execute the git commands from the git binary home directly.

The verb has already been checked above; it can only be one of:
* git-upload-pack
* git-upload-archive
* git-receive-pack
* git-lfs-authenticate

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2023-02-28 16:46:27 -06:00
Yarden Shoham
35a3b452d9 Add word-break to sidebar-item-link (#23146) (#23179)
Backport #23146

Fixes https://github.com/go-gitea/gitea/issues/22953

![image](https://user-images.githubusercontent.com/18380374/221351117-1e4b8922-04ca-4717-8e3b-c338a61bc062.png)

Co-authored-by: yp05327 <576951401@qq.com>
Co-authored-by: delvh <leon@kske.dev>
2023-02-27 21:30:06 +00:00
HesterG
5a60e023af Add wrapper to author to avoid long name ui problem (#23030) (#23172)
Backport #23030 

This PR is a possible solution for issue #22866. The main change is to
add an `author-wrapper` class around the author name, like the wrapper
added to the message. The `max-width` is set to 200px on desktop and
100px on mobile devices for now, which works like below:

<img width="1183" alt="2023-02-21 11 57 53"
src="https://user-images.githubusercontent.com/17645053/220244146-3d47c512-33b6-4ed8-938e-de0a8bc26ffb.png">

<img width="417" alt="2023-02-21 11 58 43"
src="https://user-images.githubusercontent.com/17645053/220244154-1ea0476b-9d1c-473a-9917-d3216860f9a9.png">

A `title` is also added to the wrapper, as was done for the message
wrapper, so the full author name shows on hover.
2023-02-27 22:47:04 +08:00
Yarden Shoham
1170e067b2 Fix DBConsistency checks on MSSQL (#23132) (#23133)
Backport #23132

Unfortunately xorm's `builder.Select(...).From(...)` does not escape
table names. This is mostly not a problem, but it is a problem with the
`user` table.

This PR simply escapes the user table. No other uses of `From("user")`
were found in the codebase, so I think this should be all that is
needed.

Fix #23064
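
For illustration, a minimal sketch of the kind of explicit quoting involved, using xorm's builder package; the quoting style and query shown are illustrative, not the exact statement used by the consistency check:

```go
package main

import (
	"fmt"

	"xorm.io/builder"
)

func main() {
	// The table name is quoted by hand (MySQL-style backticks shown only
	// for illustration) because builder does not escape it; "user" is a
	// reserved word on some databases.
	sql, args, err := builder.Select("count(*)").From("`user`").ToSQL()
	fmt.Println(sql, args, err)
}
```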

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
2023-02-25 21:17:14 +02:00
Yarden Shoham
8adc6a188e Fix height for sticky head on large screen on PR page (#23111) (#23124)
Backport #23111

Right now, on the PR 'File Change' tab, the file title header that
sticks to the top on large screens has the wrong height, resulting in
wrong UI behavior when scrolling down. This PR fixes this.

Before:

<img width="964" alt="截屏2023-02-24 17 12 29"
src="https://user-images.githubusercontent.com/17645053/221140409-025c4a84-6bbe-4b5b-a13f-bd2b79063522.png">

After:
<img width="1430" alt="截屏2023-02-24 21 10 12"
src="https://user-images.githubusercontent.com/17645053/221186750-0344d652-4610-4a90-a4c0-7f6269f950d6.png">

---------

Co-authored-by: HesterG <hestergong@gmail.com>
Co-authored-by: Andrew Thornton <art27@cantab.net>
2023-02-24 16:35:59 +00:00
John Olheiser
48eb5ac685 Changelog 1.18.5 (#23045)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2023-02-21 12:14:55 -06:00
Yarden Shoham
8f5b2f1ddf Return empty url for submodule tree entries (#23043) (#23048)
Backport #23043

Close #22614.

Per [GitHub's
API](https://docs.github.com/en/rest/git/trees?apiVersion=2022-11-28#get-a-tree),
if a tree entry is a submodule, its url will be an empty string.

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: delvh <leon@kske.dev>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
2023-02-21 12:35:14 -05:00
John Olheiser
bbfd34575a Display attachments of review comment when comment content is blank (#23035) (#23046)
Backport #23035

Co-authored-by: sillyguodong <33891828+sillyguodong@users.noreply.github.com>
2023-02-21 11:10:29 -06:00
Kyle D
760cf419ba Use beforeCommit instead of baseCommit (#22949) (#22996)
Backport #22949
Fixes https://github.com/go-gitea/gitea/issues/22946
Probably related to https://github.com/go-gitea/gitea/issues/19530

Co-authored-by: Jonathan Tran <jonnytran@gmail.com>
2023-02-21 10:51:02 -05:00
Jason Song
90982bffa5 Add force_merge to merge request and fix checking mergable (#23010) (#23032)
Backport #23010.

Fix #23000.

The bug was introduced in #22633, and it seems that it had already been
noticed:
https://github.com/go-gitea/gitea/pull/22633#discussion_r1095395359

However, #22633 did nothing wrong; the logic should be "check whether
the user is an admin only when `force` is true".

So we should provide `ForceMerge` when merging from the UI.

After this, an admin can also send a normal merge request with
`ForceMerge` set to false. This fixes a potential bug: if the admin
doesn't want to do a force merge, they just see the green "Merge" button
and click it. In the meantime the status of the PR may have changed so
that it shouldn't be merged now, and the admin could end up sending an
unexpected force merge.

In addition, I updated `ForceMerge *bool` to `ForceMerge bool`; I don't
see a reason to use a pointer.

The logic of CheckPullMergable is also fixed to handle auto merge and
force merge correctly.
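
A minimal sketch of the intended rule, with stand-in types rather than Gitea's real form and permission structures:

```go
package sketch

import "errors"

// mergeForm and checkForceMerge are stand-ins for Gitea's real types; the
// point is the order of the checks: force is only considered when actually
// requested, and only admins may use it.
type mergeForm struct {
	ForceMerge bool
}

func checkForceMerge(form mergeForm, isAdmin, mergeable bool) error {
	if mergeable {
		return nil // normal merge path, no force involved
	}
	if !form.ForceMerge {
		return errors.New("pull request is not mergeable")
	}
	if !isAdmin {
		return errors.New("only admins may force merge")
	}
	return nil
}
```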
2023-02-21 09:42:22 -06:00
Yarden Shoham
8fa62be905 Render access log template as text instead of HTML (#23013) (#23025)
Backport #23013

Fix https://github.com/go-gitea/gitea/pull/22906#discussion_r1112106675

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-21 18:04:57 +08:00
wxiaoguang
7b3ffe5745 Fix the Manually Merged form (#23015) (#23017)
Backport #23015

---------

Co-authored-by: Jason Song <i@wolfogre.com>
2023-02-21 18:04:29 +08:00
wxiaoguang
c50d4202ef Use --message=%s for git commit message (#23028) (#23029)
Backport #23028

This backport is done manually because the git module is different.
2023-02-21 14:16:25 +08:00
Yarden Shoham
660a83bd2e Hide 2FA status from other members in organization members list (#22999) (#23023)
Backport #22999

This is rather private information that should not be given to all
members in the same organization. Only show it to organization owners.

Co-authored-by: Brecht Van Lommel <brecht@blender.org>
2023-02-20 19:14:39 -05:00
Lunny Xiao
4c7786b3b6 Add 1.18.4 changelog (#22991)
Feel free to change the content. @go-gitea/maintainers

---------

Co-authored-by: delvh <dev.lh@web.de>
2023-02-20 10:45:07 +08:00
zeripath
c702e7995d Provide the ability to set password hash algorithm parameters (#22942) (#22943)
Backport #22942

This PR refactors and improves the password hashing code within gitea
and makes it possible for server administrators to set the password
hashing parameters

In addition it takes the opportunity to adjust the settings for `pbkdf2`
in order to make the hashing a little stronger.

The majority of this work was inspired by PR #14751 and I would like to
thank @boppy for their work on this.

Thanks to @gusted for the suggestion to adjust the `pbkdf2` hashing
parameters.

Close #14751

---------

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-19 15:35:52 +08:00
Yarden Shoham
b2e58edd74 Notify on container image create (#22806) (#22965)
Backport #22806

Fixes #22791

---------

Signed-off-by: Yarden Shoham <hrsi88@gmail.com>
Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-18 21:54:22 +08:00
Yarden Shoham
98b7714c3b Fix 404 error viewing the LFS file (#22945) (#22948)
Backport #22945

Fix #22734.

According to
[`view_file.tmpl`](https://github.com/go-gitea/gitea/blob/main/templates/repo/view_file.tmpl#L82),
`lfs_file.tmpl` should use `AssetUrlPrefix` instead of `AppSubUrl`.

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
2023-02-17 15:22:05 +08:00
zeripath
9da4642c8c Fix blame view missing lines (#22826) (#22929)
Backport #22826

Creating a new buffered reader for every part of the blame can miss
lines, as it will read and buffer bytes that the next buffered reader
will not get.
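
For illustration, a minimal Go sketch of the pitfall and the safer pattern (a sketch only, not the blame reader itself):

```go
package sketch

import (
	"bufio"
	"io"
)

// readAllLines illustrates the pitfall: wrapping the same underlying reader
// in a new bufio.Reader for every part can silently drop data, because
// bytes already buffered by the previous bufio.Reader never reach the next
// one. Creating the buffered reader once and reusing it avoids that.
func readAllLines(r io.Reader, handle func(line string)) error {
	br := bufio.NewReader(r) // created once for the whole stream
	for {
		line, err := br.ReadString('\n')
		if line != "" {
			handle(line)
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```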

---------

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Brecht Van Lommel <brecht@blender.org>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-17 10:19:24 +08:00
Lunny Xiao
1d191f9b5a some refactor about code comments(#20821) (#22707)
fix #22691
backport #20821

Co-authored-by: zeripath <art27@cantab.net>
2023-02-16 21:21:25 +08:00
zeripath
2e1afd54b2 Add command to bulk set must-change-password (#22823) (#22928)
Backport #22823

As part of administration, it is sometimes appropriate to forcibly tell
users to update their passwords.

This PR creates a new command `gitea admin user must-change-password`
which will set the `MustChangePassword` flag on the provided users.
---------

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Jason Song <i@wolfogre.com>
2023-02-16 20:33:24 +08:00
Yarden Shoham
9e68261ca7 fix incorrect role labels for migrated issues and comments (#22914) (#22923)
Backport #22914

Fix #22797.

## Reason
If a comment was migrated from another platform, it may have an
original author, and its poster is never the original author. When
the `roleDescriptor` func gets the poster's role descriptor for a
comment, it does not check whether the comment has an original author,
so migrated comments' original authors might be marked with incorrect
roles.
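
A minimal sketch of the missing check, with a stand-in comment type rather than Gitea's model:

```go
package sketch

// Comment is a stand-in for Gitea's comment model. The missing check: a
// migrated comment keeps its original author, so the poster's role label
// should not be computed for it.
type Comment struct {
	OriginalAuthor string
	PosterID       int64
}

func shouldShowPosterRole(c *Comment) bool {
	return c.OriginalAuthor == ""
}
```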

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-16 11:19:46 +08:00
zeripath
e4238583db Improve trace logging for pulls and processes (#22633) (#22812)
Backport #22633

Our trace logging is far from perfect and is difficult to follow.

This PR:

* Adds trace logging for process manager add and remove.
* Fixes an errant file read for git refs in getMergeCommit.
* Brings in the pullrequest `String` and `ColorFormat` methods
introduced in #22568.
* Adds a lot more logging to testPR etc.

Ref #22578

---------
Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-02-13 11:17:36 +08:00
Yarden Shoham
656d5a144f Fix PR file tree folders no longer collapsing (#22864) (#22872)
Backport #22864

Collapsing folders currently just throws a console error

```
index.js?v=1.19.0~dev-403-gb6b8feb3d:10 TypeError: this.$set is not a function
    at Proxy.handleClick (index.js?v=1.19.0~dev-403-gb6b8feb3d:58:7159)
    at index.js?v=1.19.0~dev-403-gb6b8feb3d:58:6466
    at index.js?v=1.19.0~dev-403-gb6b8feb3d:10:93922
    at ce (index.js?v=1.19.0~dev-403-gb6b8feb3d:10:1472)
    at Q (index.js?v=1.19.0~dev-403-gb6b8feb3d:10:1567)
    at HTMLDivElement.$e (index.js?v=1.19.0~dev-403-gb6b8feb3d:10:79198)
```

This PR fixes this and allows folders to be collapsed again.

Also:
- better cursor interaction with folders
- added some color to the diff detail stats
- remove green link color from all the file names

Screenshots:

![image](https://user-images.githubusercontent.com/9765622/218269712-2f3dda55-6d70-407f-8d34-2a5d9c8df548.png)

![image](https://user-images.githubusercontent.com/9765622/218269714-6ce8a954-daea-4ed6-9eea-8b2323db4d8f.png)

Co-authored-by: gempir <daniel.pasch.s@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-12 13:14:19 +02:00
Yarden Shoham
43d1183f67 escape filename when assemble URL (#22850) (#22871)
Backport #22850

Fixes: #22843 

### Cause:

affdd40296/services/repository/files/content.go (L161)

Previously, we did not escape the **"%"** that might appear in "treePath"
when calling "url.Parse()".


![image](https://user-images.githubusercontent.com/33891828/218066318-5a909e50-2a17-46e6-b32f-684b2aa4b91f.png)

This function checks whether "%" is the beginning of an escape
sequence. Obviously, the "%" in the example (hello%mother.txt) is not,
so the function returns an error.

### Solution:
We can escape "treePath" by calling the "url.PathEscape()" function first.

### Screenshot:

![image](https://user-images.githubusercontent.com/33891828/218069781-1a030f8b-18d0-4804-b0f8-73997849ef43.png)

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: sillyguodong <33891828+sillyguodong@users.noreply.github.com>
Co-authored-by: Andrew Thornton <art27@cantab.net>
2023-02-12 09:39:52 +00:00
Gusted
8fa419c4c1 Use proxy for pull mirror (#22771) (#22772)
- Backport #22771
  - Use the proxy (if one is specified) for pull mirror syncs.
- Pulled the code from
c2774d9e80/modules/git/repo.go (L164-L170)
  - Downstream issue: https://codeberg.org/forgejo/forgejo/issues/302

---------

Co-authored-by: zeripath <art27@cantab.net>
2023-02-11 16:11:54 +08:00
Jason Song
77c89572e9 Fix isAllowed of escapeStreamer (#22814) (#22837)
Backport #22814.

The use of `sort.Search` is wrong: the slice should be sorted, and a
return value `>= 0` doesn't mean the element exists; see the
[manual](https://pkg.go.dev/sort#Search).

Could be fixed like this if we really need it:

```diff
diff --git a/modules/charset/escape_stream.go b/modules/charset/escape_stream.go
index 823b63513..fcf1ffbc1 100644
--- a/modules/charset/escape_stream.go
+++ b/modules/charset/escape_stream.go
@@ -20,6 +20,9 @@ import (
 var defaultWordRegexp = regexp.MustCompile(`(-?\d*\.\d\w*)|([^\` + "`" + `\~\!\@\#\$\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s\x00-\x1f]+)`)

 func NewEscapeStreamer(locale translation.Locale, next HTMLStreamer, allowed ...rune) HTMLStreamer {
+       sort.Slice(allowed, func(i, j int) bool {
+               return allowed[i] < allowed[j]
+       })
        return &escapeStreamer{
                escaped:                 &EscapeStatus{},
                PassthroughHTMLStreamer: *NewPassthroughStreamer(next),
@@ -284,14 +287,8 @@ func (e *escapeStreamer) runeTypes(runes ...rune) (types []runeType, confusables
 }

 func (e *escapeStreamer) isAllowed(r rune) bool {
-       if len(e.allowed) == 0 {
-               return false
-       }
-       if len(e.allowed) == 1 {
-               return e.allowed[0] == r
-       }
-
-       return sort.Search(len(e.allowed), func(i int) bool {
+       i := sort.Search(len(e.allowed), func(i int) bool {
                return e.allowed[i] >= r
-       }) >= 0
+       })
+       return i < len(e.allowed) && e.allowed[i] == r
 }
```

But I don't think we need that; a map is a better fit here (see the sketch below).
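
For illustration, a minimal sketch of the map-based alternative (a reduced stand-in, not the real escapeStreamer type):

```go
package sketch

// escapeSet is a reduced stand-in for escapeStreamer: build the allowed
// runes into a set once, then membership checks are O(1) and nothing has
// to be sorted.
type escapeSet struct {
	allowed map[rune]struct{}
}

func newEscapeSet(allowed ...rune) *escapeSet {
	set := make(map[rune]struct{}, len(allowed))
	for _, r := range allowed {
		set[r] = struct{}{}
	}
	return &escapeSet{allowed: set}
}

func (e *escapeSet) isAllowed(r rune) bool {
	_, ok := e.allowed[r]
	return ok
}
```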
2023-02-10 11:36:58 +08:00
John Olheiser
68b908d92a Load issue before accessing index in merge message (#22822) (#22830)
Backport #22822

---------

Signed-off-by: jolheiser <john.olheiser@gmail.com>
2023-02-09 16:53:14 -05:00
Yarden Shoham
638fbd0b78 add default user visibility to cli command "admin user create" (#22750) (#22760)
Backport #22750

Fixes https://github.com/go-gitea/gitea/issues/22523

Co-authored-by: yp05327 <576951401@qq.com>
2023-02-08 11:04:38 -06:00
Yarden Shoham
3647e62ef9 Fix color of tertiary button on dark theme (#22739) (#22744)
Backport #22739

Before:
<img width="266" alt="Screenshot 2023-02-03 at 14 07 34"
src="https://user-images.githubusercontent.com/115237/216611151-92e98305-c4b5-42f3-b2e2-8b1b805fa644.png">

After:
<img width="271" alt="Screenshot 2023-02-03 at 14 07 52"
src="https://user-images.githubusercontent.com/115237/216611156-878a8a75-39a1-415b-9b6d-4f035985444e.png">

This is the only instance of such a button in all templates.

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-08 23:44:40 +08:00
Yarden Shoham
37bbf2c902 Fix restore repo bug, clarify the problem of ForeignIndex (#22776) (#22794)
Backport #22776

Fix #22581

TLDR: #18446 made a mess with ForeignIndex and triggered a design
flaw/bug of #16356, then a quick patch #21271 helped #18446, and then
the bug was re-triggered by #21721.

Related:
* #16356
* BasicIssueContext
https://github.com/go-gitea/gitea/pull/16356/files#diff-7938eb670d42a5ead6b08121e16aa4537a4d716c1cf37923c70470020fb9d036R16-R27
* #18446 
* If some issues were dumped without ForeignIndex, then they would be
imported as ForeignIndex=0
https://github.com/go-gitea/gitea/pull/18446/files#diff-1624a3e715d8fc70edf2db1630642b7d6517f8c359cc69d58c3958b34ba4ce5eR38-R39
* #21271
* It patched the above bug (somewhat), so that issues without
ForeignIndex could have the same value as LocalIndex
* #21721 
    * It re-triggered the zero-ForeignIndex bug.


ps: I am not sure whether the changes in `GetForeignIndex` are ideal (at
least, it now has almost the same behavior as BasicIssueContext in
#16356); it's just a quick fix. Feel free to edit this PR directly or
replace it.

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2023-02-08 08:39:42 +00:00
KN4CK3R
a239d6c4a9 Use import of OCI structs (#22765) (#22805)
Backport of #22765

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2023-02-08 07:50:19 +08:00
Lunny Xiao
ff2014690d upgrade golangcilint to v1.51.0 (#22764)
With the upgrade to Go 1.20, golangci-lint no longer works correctly. We must therefore upgrade to the latest golangci-lint.

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2023-02-07 19:28:25 +00:00
wxiaoguang
03c644c48c Escape path for the file list (#22741) (#22757)
Backport #22741
Fix #22740
2023-02-06 12:58:06 +00:00
Yarden Shoham
965376d476 use drone secrets for s3 config (#22770) (#22773) 2023-02-05 18:42:45 -05:00
zeripath
2e12161620 Fix bugs with WebAuthn preventing sign in and registration. (#22651) (#22721)
Partial Backport #22651

This PR fixes a longstanding bug within WebAuthn: the backend uses
URLEncodedBase64, but the JavaScript decodes using plain base64. This
caused intermittent issues, with users reporting decoding errors.

Fix #22507

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-02 15:45:57 +08:00
crystal
9cde526f87 Fix line spacing for plaintext previews (#22699) (#22701)
Backport #22699

Adding `<br>` between each line is not necessary since the entire file
is rendered inside a `<pre>`.

fixes https://codeberg.org/Codeberg/Community/issues/915
2023-02-01 22:06:58 +00:00
Yarden Shoham
4c20be7c00 Add missing close bracket in imagediff (#22710) (#22712)
Backport #22710

There was a missing `]` in imagediff.js:

```
const $range = $container.find("input[type='range'"); 
```

This PR simply adds this.

Fix #22702

Co-authored-by: zeripath <art27@cantab.net>
2023-02-01 12:30:52 +00:00
Yarden Shoham
263d06f616 Fix wrong hint when deleting a branch successfully from pull request UI (#22673) (#22698)
Backport #22673

Fix #18785

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2023-02-01 01:35:38 +00:00
crystal
6dc16c1154 Fix README TOC links (#22577) (#22677)
Backport #22577 

Fixes anchored markup links by adding `user-content-` (which is
prepended to IDs)

Closes https://codeberg.org/Codeberg/Community/issues/894
2023-01-31 17:23:19 +08:00
Gusted
fd2c250b52 Don't return duplicated users who can create org repo (#22560) (#22562)
- Backport of #22560
- Currently the function `GetUsersWhoCanCreateOrgRepo` uses a query that
can return duplicated users in the result; this can happen when a user
is in a team that either is the owner team or has permission to create
organization repositories.
- Add test code to simulate the above condition for user 3;
[`TestGetUsersWhoCanCreateOrgRepo`](a1fcb1cfb8/models/organization/org_test.go (L435))
is the test function that tests for this.
  - The fix is quite trivial: use a map as a set to get distinct entries
(a minimal sketch follows below).
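
A minimal sketch of that map-as-a-set deduplication, with a stand-in user type:

```go
package sketch

// User is a stand-in for Gitea's user model; the map-as-a-set fix keys on
// ID so that a user appearing in several qualifying teams is returned once.
type User struct {
	ID   int64
	Name string
}

func distinctUsers(users []*User) []*User {
	seen := make(map[int64]struct{}, len(users))
	out := make([]*User, 0, len(users))
	for _, u := range users {
		if _, ok := seen[u.ID]; ok {
			continue
		}
		seen[u.ID] = struct{}{}
		out = append(out, u)
	}
	return out
}
```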
2023-01-30 16:59:20 +00:00
John Olheiser
e6d6bce1f6 Fix missing message in git hook when pull requests disabled on fork (#22625) (#22658)
Backport #22625

Co-authored-by: Brecht Van Lommel <brecht@blender.org>
2023-01-30 08:55:45 +08:00
zeripath
a9ba7379fe Improve checkIfPRContentChanged (#22611) (#22644)
Backport #22611

The code for checking whether a commit has caused a change in a PR is
extremely inefficient and affects the head repository instead of using a
temporary repository.

This PR therefore makes several significant improvements:

* Use a temporary repo like the one used in merging.
* The diff code is then significantly improved to use a three-way diff
instead of comparing diffs (possibly binary) line-by-line in memory...

Ref #22578

Signed-off-by: Andrew Thornton <art27@cantab.net>
2023-01-28 17:56:16 +00:00
Yarden Shoham
6be1d71e2b Link issue and pull requests status change in UI notifications directly to their event in the timelined view. (#22627) (#22642)
Backport #22627

Adding the related comment to issue and pull request status changes
in the UI notifications allows navigating directly to the specific
event in its dedicated view, easing the reading of the latest comments
and reaching the editor for additional comments if desired.

Co-authored-by: Felipe Leopoldo Sologuren Gutiérrez <fsologureng@users.noreply.github.com>
2023-01-28 15:51:00 +00:00
Yarden Shoham
9f5e44bf50 Use --index-url in PyPi description (#22620) (#22636) 2023-01-28 12:57:12 +08:00
Yarden Shoham
f204ff4ef7 Prevent duplicate labels when importing more than 99 (#22591) (#22598)
Backport #22591

Importing labels (via `gitea restore-repo`) did not split them up into
batches properly. The first "batch" would create all labels, the second
"batch" would create all labels except those in the first "batch", etc.
This meant that when importing more than 99 labels (the batch size)
there would always be duplicate ones.

This is solved by actually passing `labels[:lbBatchSize]` to the
`CreateLabels()` function, instead of the entire list `labels`.
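
For illustration, a minimal sketch of the corrected batching, with a stand-in label type and create callback:

```go
package sketch

const lbBatchSize = 99

// createInBatches shows the corrected slicing: each call receives only the
// current chunk (labels[:n]), never the whole remaining list, so no label
// is created twice. The label type and create callback are stand-ins.
func createInBatches(labels []string, create func(batch []string) error) error {
	for len(labels) > 0 {
		n := lbBatchSize
		if len(labels) < n {
			n = len(labels)
		}
		if err := create(labels[:n]); err != nil {
			return err
		}
		labels = labels[n:]
	}
	return nil
}
```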

Co-authored-by: Sybren <122987084+drsybren@users.noreply.github.com>
2023-01-24 14:48:21 -06:00
John Olheiser
f6cb7860a2 Changelog 1.18.3 (#22575)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2023-01-23 08:42:02 -06:00
Yarden Shoham
6068978c42 Prevent multiple To recipients (#22566) (#22569)
Backport #22566

Change the mailer interface to prevent the leaking of possible hidden
email addresses when sending to multiple recipients.

Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
Co-authored-by: Gusted <williamzijl7@hotmail.com>
2023-01-22 11:37:26 -06:00
Yarden Shoham
c320caed97 Truncate commit summary on repo files table. (#22551) (#22552)
Backport #22551
There was an unintended regression in #21124, which assumed that
.commits-list .message-wrapper would only match the commit summaries on
/{owner}/{name}/commits/*. This assumption is incorrect, as the
directory/file view also uses a .commits-list wrapper.

Rather than completely restructure this page, this PR simply adjusts the
styling to again use display: inline-block; for #repo-files-table
.commit-list .message-wrapper.

Fix #22360
2023-01-20 23:34:52 +08:00
silverwind
f1c826ed29 Mute all links in issue timeline (#22534)
Backport of https://github.com/go-gitea/gitea/pull/22533.
https://github.com/go-gitea/gitea/pull/21799 introduced a regression
where some links in the issue timeline were not muted any more. Fix it
by replacing all `class="text grey"` with `class="text grey
muted-links"` in the file.

Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
2023-01-20 00:18:58 -05:00
172 changed files with 3157 additions and 1610 deletions
+30 -12
@@ -769,10 +769,16 @@ steps:
image: woodpeckerci/plugin-s3:latest
pull: always
settings:
acl: public-read
bucket: gitea-artifacts
endpoint: https://ams3.digitaloceanspaces.com
path_style: true
acl:
from_secret: aws_s3_acl
region:
from_secret: aws_s3_region
bucket:
from_secret: aws_s3_bucket
endpoint:
from_secret: aws_s3_endpoint
path_style:
from_secret: aws_s3_path_style
source: "dist/release/*"
strip_prefix: dist/release/
target: "/gitea/${DRONE_BRANCH##release/v}"
@@ -790,10 +796,16 @@ steps:
- name: release-main
image: woodpeckerci/plugin-s3:latest
settings:
acl: public-read
bucket: gitea-artifacts
endpoint: https://ams3.digitaloceanspaces.com
path_style: true
acl:
from_secret: aws_s3_acl
region:
from_secret: aws_s3_region
bucket:
from_secret: aws_s3_bucket
endpoint:
from_secret: aws_s3_endpoint
path_style:
from_secret: aws_s3_path_style
source: "dist/release/*"
strip_prefix: dist/release/
target: /gitea/main
@@ -892,10 +904,16 @@ steps:
image: woodpeckerci/plugin-s3:latest
pull: always
settings:
acl: public-read
bucket: gitea-artifacts
endpoint: https://ams3.digitaloceanspaces.com
path_style: true
acl:
from_secret: aws_s3_acl
region:
from_secret: aws_s3_region
bucket:
from_secret: aws_s3_bucket
endpoint:
from_secret: aws_s3_endpoint
path_style:
from_secret: aws_s3_path_style
source: "dist/release/*"
strip_prefix: dist/release/
target: "/gitea/${DRONE_TAG##v}"
+3
@@ -173,3 +173,6 @@ issues:
linters:
- revive
text: "exported: type name will be used as user.UserBadge by other packages, and that stutters; consider calling this Badge"
- path: models/db/sql_postgres_with_schema.go
linters:
- nolintlint
+58
@@ -4,6 +4,64 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.18.5](https://github.com/go-gitea/gitea/releases/tag/v1.18.5) - 2023-02-21
* ENHANCEMENTS
* Hide 2FA status from other members in organization members list (#22999) (#23023)
* BUGFIXES
* Add force_merge to merge request and fix checking mergable (#23010) (#23032)
* Use `--message=%s` for git commit message (#23028) (#23029)
* Render access log template as text instead of HTML (#23013) (#23025)
* Fix the Manually Merged form (#23015) (#23017)
* Use beforeCommit instead of baseCommit (#22949) (#22996)
* Display attachments of review comment when comment content is blank (#23035) (#23046)
* Return empty url for submodule tree entries (#23043) (#23048)
## [1.18.4](https://github.com/go-gitea/gitea/releases/tag/1.18.4) - 2023-02-20
* SECURITY
* Provide the ability to set password hash algorithm parameters (#22942) (#22943)
* Add command to bulk set must-change-password (#22823) (#22928)
* ENHANCEMENTS
* Use import of OCI structs (#22765) (#22805)
* Fix color of tertiary button on dark theme (#22739) (#22744)
* Link issue and pull requests status change in UI notifications directly to their event in the timelined view. (#22627) (#22642)
* BUGFIXES
* Notify on container image create (#22806) (#22965)
* Fix blame view missing lines (#22826) (#22929)
* Fix incorrect role labels for migrated issues and comments (#22914) (#22923)
* Fix PR file tree folders no longer collapsing (#22864) (#22872)
* Escape filename when assemble URL (#22850) (#22871)
* Fix isAllowed of escapeStreamer (#22814) (#22837)
* Load issue before accessing index in merge message (#22822) (#22830)
* Improve trace logging for pulls and processes (#22633) (#22812)
* Fix restore repo bug, clarify the problem of ForeignIndex (#22776) (#22794)
* Add default user visibility to cli command "admin user create" (#22750) (#22760)
* Escape path for the file list (#22741) (#22757)
* Fix bugs with WebAuthn preventing sign in and registration. (#22651) (#22721)
* Add missing close bracket in imagediff (#22710) (#22712)
* Move code comments to a standalone file and fix the bug when adding a reply to an outdated review appears to not post(#20821) (#22707)
* Fix line spacing for plaintext previews (#22699) (#22701)
* Fix wrong hint when deleting a branch successfully from pull request UI (#22673) (#22698)
* Fix README TOC links (#22577) (#22677)
* Fix missing message in git hook when pull requests disabled on fork (#22625) (#22658)
* Improve checkIfPRContentChanged (#22611) (#22644)
* Prevent duplicate labels when importing more than 99 (#22591) (#22598)
* Don't return duplicated users who can create org repo (#22560) (#22562)
* BUILD
* Upgrade golangcilint to v1.51.0 (#22764)
* MISC
* Use proxy for pull mirror (#22771) (#22772)
* Use `--index-url` in PyPi description (#22620) (#22636)
## [1.18.3](https://github.com/go-gitea/gitea/releases/tag/v1.18.3) - 2023-01-23
* SECURITY
* Prevent multiple `To` recipients (#22566) (#22569)
* BUGFIXES
* Truncate commit summary on repo files table. (#22551) (#22552)
* Mute all links in issue timeline (#22534)
## [1.18.2](https://github.com/go-gitea/gitea/releases/tag/v1.18.2) - 2023-01-19
* BUGFIXES
+2 -2
@@ -28,8 +28,8 @@ XGO_VERSION := go-1.19.x
AIR_PACKAGE ?= github.com/cosmtrek/air@v1.40.4
EDITORCONFIG_CHECKER_PACKAGE ?= github.com/editorconfig-checker/editorconfig-checker/cmd/editorconfig-checker@2.5.0
ERRCHECK_PACKAGE ?= github.com/kisielk/errcheck@v1.6.1
GOFUMPT_PACKAGE ?= mvdan.cc/gofumpt@v0.3.1
GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/cmd/golangci-lint@v1.47.0
GOFUMPT_PACKAGE ?= mvdan.cc/gofumpt@v0.4.0
GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/cmd/golangci-lint@v1.51.0
GXZ_PAGAGE ?= github.com/ulikunitz/xz/cmd/gxz@v0.5.10
MISSPELL_PACKAGE ?= github.com/client9/misspell/cmd/misspell@v0.3.4
SWAGGER_PACKAGE ?= github.com/go-swagger/go-swagger/cmd/swagger@v0.30.0
+10
File diff suppressed because one or more lines are too long
+11 -392
@@ -6,7 +6,6 @@
package cmd
import (
"context"
"errors"
"fmt"
"os"
@@ -17,20 +16,14 @@ import (
auth_model "code.gitea.io/gitea/models/auth"
"code.gitea.io/gitea/models/db"
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/graceful"
"code.gitea.io/gitea/modules/log"
pwd "code.gitea.io/gitea/modules/password"
repo_module "code.gitea.io/gitea/modules/repository"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/util"
auth_service "code.gitea.io/gitea/services/auth"
"code.gitea.io/gitea/services/auth/source/oauth2"
"code.gitea.io/gitea/services/auth/source/smtp"
repo_service "code.gitea.io/gitea/services/repository"
user_service "code.gitea.io/gitea/services/user"
"github.com/urfave/cli"
)
@@ -49,142 +42,6 @@ var (
},
}
subcmdUser = cli.Command{
Name: "user",
Usage: "Modify users",
Subcommands: []cli.Command{
microcmdUserCreate,
microcmdUserList,
microcmdUserChangePassword,
microcmdUserDelete,
microcmdUserGenerateAccessToken,
},
}
microcmdUserList = cli.Command{
Name: "list",
Usage: "List users",
Action: runListUsers,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "admin",
Usage: "List only admin users",
},
},
}
microcmdUserCreate = cli.Command{
Name: "create",
Usage: "Create a new user in database",
Action: runCreateUser,
Flags: []cli.Flag{
cli.StringFlag{
Name: "name",
Usage: "Username. DEPRECATED: use username instead",
},
cli.StringFlag{
Name: "username",
Usage: "Username",
},
cli.StringFlag{
Name: "password",
Usage: "User password",
},
cli.StringFlag{
Name: "email",
Usage: "User email address",
},
cli.BoolFlag{
Name: "admin",
Usage: "User is an admin",
},
cli.BoolFlag{
Name: "random-password",
Usage: "Generate a random password for the user",
},
cli.BoolFlag{
Name: "must-change-password",
Usage: "Set this option to false to prevent forcing the user to change their password after initial login, (Default: true)",
},
cli.IntFlag{
Name: "random-password-length",
Usage: "Length of the random password to be generated",
Value: 12,
},
cli.BoolFlag{
Name: "access-token",
Usage: "Generate access token for the user",
},
cli.BoolFlag{
Name: "restricted",
Usage: "Make a restricted user account",
},
},
}
microcmdUserChangePassword = cli.Command{
Name: "change-password",
Usage: "Change a user's password",
Action: runChangePassword,
Flags: []cli.Flag{
cli.StringFlag{
Name: "username,u",
Value: "",
Usage: "The user to change password for",
},
cli.StringFlag{
Name: "password,p",
Value: "",
Usage: "New password to set for user",
},
},
}
microcmdUserDelete = cli.Command{
Name: "delete",
Usage: "Delete specific user by id, name or email",
Flags: []cli.Flag{
cli.Int64Flag{
Name: "id",
Usage: "ID of user of the user to delete",
},
cli.StringFlag{
Name: "username,u",
Usage: "Username of the user to delete",
},
cli.StringFlag{
Name: "email,e",
Usage: "Email of the user to delete",
},
cli.BoolFlag{
Name: "purge",
Usage: "Purge user, all their repositories, organizations and comments",
},
},
Action: runDeleteUser,
}
microcmdUserGenerateAccessToken = cli.Command{
Name: "generate-access-token",
Usage: "Generate a access token for a specific user",
Flags: []cli.Flag{
cli.StringFlag{
Name: "username,u",
Usage: "Username",
},
cli.StringFlag{
Name: "token-name,t",
Usage: "Token name",
Value: "gitea-admin",
},
cli.BoolFlag{
Name: "raw",
Usage: "Display only the token value",
},
},
Action: runGenerateAccessToken,
}
subcmdRepoSyncReleases = cli.Command{
Name: "repo-sync-releases",
Usage: "Synchronize repository releases with tags",
@@ -304,6 +161,11 @@ var (
Value: "false",
Usage: "Use custom URLs for GitLab/GitHub OAuth endpoints",
},
cli.StringFlag{
Name: "custom-tenant-id",
Value: "",
Usage: "Use custom Tenant ID for OAuth endpoints",
},
cli.StringFlag{
Name: "custom-auth-url",
Value: "",
@@ -468,255 +330,6 @@ var (
}
)
func runChangePassword(c *cli.Context) error {
if err := argsSet(c, "username", "password"); err != nil {
return err
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
if len(c.String("password")) < setting.MinPasswordLength {
return fmt.Errorf("Password is not long enough. Needs to be at least %d", setting.MinPasswordLength)
}
if !pwd.IsComplexEnough(c.String("password")) {
return errors.New("Password does not meet complexity requirements")
}
pwned, err := pwd.IsPwned(context.Background(), c.String("password"))
if err != nil {
return err
}
if pwned {
return errors.New("The password you chose is on a list of stolen passwords previously exposed in public data breaches. Please try again with a different password.\nFor more details, see https://haveibeenpwned.com/Passwords")
}
uname := c.String("username")
user, err := user_model.GetUserByName(ctx, uname)
if err != nil {
return err
}
if err = user.SetPassword(c.String("password")); err != nil {
return err
}
if err = user_model.UpdateUserCols(ctx, user, "passwd", "passwd_hash_algo", "salt"); err != nil {
return err
}
fmt.Printf("%s's password has been successfully updated!\n", user.Name)
return nil
}
func runCreateUser(c *cli.Context) error {
if err := argsSet(c, "email"); err != nil {
return err
}
if c.IsSet("name") && c.IsSet("username") {
return errors.New("Cannot set both --name and --username flags")
}
if !c.IsSet("name") && !c.IsSet("username") {
return errors.New("One of --name or --username flags must be set")
}
if c.IsSet("password") && c.IsSet("random-password") {
return errors.New("cannot set both -random-password and -password flags")
}
var username string
if c.IsSet("username") {
username = c.String("username")
} else {
username = c.String("name")
fmt.Fprintf(os.Stderr, "--name flag is deprecated. Use --username instead.\n")
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
var password string
if c.IsSet("password") {
password = c.String("password")
} else if c.IsSet("random-password") {
var err error
password, err = pwd.Generate(c.Int("random-password-length"))
if err != nil {
return err
}
fmt.Printf("generated random password is '%s'\n", password)
} else {
return errors.New("must set either password or random-password flag")
}
// always default to true
changePassword := true
// If this is the first user being created.
// Take it as the admin and don't force a password update.
if n := user_model.CountUsers(nil); n == 0 {
changePassword = false
}
if c.IsSet("must-change-password") {
changePassword = c.Bool("must-change-password")
}
restricted := util.OptionalBoolNone
if c.IsSet("restricted") {
restricted = util.OptionalBoolOf(c.Bool("restricted"))
}
u := &user_model.User{
Name: username,
Email: c.String("email"),
Passwd: password,
IsAdmin: c.Bool("admin"),
MustChangePassword: changePassword,
}
overwriteDefault := &user_model.CreateUserOverwriteOptions{
IsActive: util.OptionalBoolTrue,
IsRestricted: restricted,
}
if err := user_model.CreateUser(u, overwriteDefault); err != nil {
return fmt.Errorf("CreateUser: %w", err)
}
if c.Bool("access-token") {
t := &auth_model.AccessToken{
Name: "gitea-admin",
UID: u.ID,
}
if err := auth_model.NewAccessToken(t); err != nil {
return err
}
fmt.Printf("Access token was successfully created... %s\n", t.Token)
}
fmt.Printf("New user '%s' has been successfully created!\n", username)
return nil
}
func runListUsers(c *cli.Context) error {
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
users, err := user_model.GetAllUsers()
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 5, 0, 1, ' ', 0)
if c.IsSet("admin") {
fmt.Fprintf(w, "ID\tUsername\tEmail\tIsActive\n")
for _, u := range users {
if u.IsAdmin {
fmt.Fprintf(w, "%d\t%s\t%s\t%t\n", u.ID, u.Name, u.Email, u.IsActive)
}
}
} else {
twofa := user_model.UserList(users).GetTwoFaStatus()
fmt.Fprintf(w, "ID\tUsername\tEmail\tIsActive\tIsAdmin\t2FA\n")
for _, u := range users {
fmt.Fprintf(w, "%d\t%s\t%s\t%t\t%t\t%t\n", u.ID, u.Name, u.Email, u.IsActive, u.IsAdmin, twofa[u.ID])
}
}
w.Flush()
return nil
}
func runDeleteUser(c *cli.Context) error {
if !c.IsSet("id") && !c.IsSet("username") && !c.IsSet("email") {
return fmt.Errorf("You must provide the id, username or email of a user to delete")
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
if err := storage.Init(); err != nil {
return err
}
var err error
var user *user_model.User
if c.IsSet("email") {
user, err = user_model.GetUserByEmail(c.String("email"))
} else if c.IsSet("username") {
user, err = user_model.GetUserByName(ctx, c.String("username"))
} else {
user, err = user_model.GetUserByID(c.Int64("id"))
}
if err != nil {
return err
}
if c.IsSet("username") && user.LowerName != strings.ToLower(strings.TrimSpace(c.String("username"))) {
return fmt.Errorf("The user %s who has email %s does not match the provided username %s", user.Name, c.String("email"), c.String("username"))
}
if c.IsSet("id") && user.ID != c.Int64("id") {
return fmt.Errorf("The user %s does not match the provided id %d", user.Name, c.Int64("id"))
}
return user_service.DeleteUser(ctx, user, c.Bool("purge"))
}
func runGenerateAccessToken(c *cli.Context) error {
if !c.IsSet("username") {
return fmt.Errorf("You must provide the username to generate a token for them")
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
user, err := user_model.GetUserByName(ctx, c.String("username"))
if err != nil {
return err
}
t := &auth_model.AccessToken{
Name: c.String("token-name"),
UID: user.ID,
}
if err := auth_model.NewAccessToken(t); err != nil {
return err
}
if c.Bool("raw") {
fmt.Printf("%s\n", t.Token)
} else {
fmt.Printf("Access token was successfully created: %s\n", t.Token)
}
return nil
}
func runRepoSyncReleases(_ *cli.Context) error {
ctx, cancel := installSignals()
defer cancel()
@@ -814,6 +427,7 @@ func parseOAuth2Config(c *cli.Context) *oauth2.Source {
AuthURL: c.String("custom-auth-url"),
ProfileURL: c.String("custom-profile-url"),
EmailURL: c.String("custom-email-url"),
Tenant: c.String("custom-tenant-id"),
}
} else {
customURLMapping = nil
@@ -923,6 +537,7 @@ func runUpdateOauth(c *cli.Context) error {
customURLMapping.AuthURL = oAuth2Config.CustomURLMapping.AuthURL
customURLMapping.ProfileURL = oAuth2Config.CustomURLMapping.ProfileURL
customURLMapping.EmailURL = oAuth2Config.CustomURLMapping.EmailURL
customURLMapping.Tenant = oAuth2Config.CustomURLMapping.Tenant
}
if c.IsSet("use-custom-urls") && c.IsSet("custom-token-url") {
customURLMapping.TokenURL = c.String("custom-token-url")
@@ -940,6 +555,10 @@ func runUpdateOauth(c *cli.Context) error {
customURLMapping.EmailURL = c.String("custom-email-url")
}
if c.IsSet("use-custom-urls") && c.IsSet("custom-tenant-id") {
customURLMapping.Tenant = c.String("custom-tenant-id")
}
oAuth2Config.CustomURLMapping = customURLMapping
source.Cfg = oAuth2Config
+21
@@ -0,0 +1,21 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"github.com/urfave/cli"
)
var subcmdUser = cli.Command{
Name: "user",
Usage: "Modify users",
Subcommands: []cli.Command{
microcmdUserCreate,
microcmdUserList,
microcmdUserChangePassword,
microcmdUserDelete,
microcmdUserGenerateAccessToken,
microcmdUserMustChangePassword,
},
}
+76
@@ -0,0 +1,76 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"context"
"errors"
"fmt"
user_model "code.gitea.io/gitea/models/user"
pwd "code.gitea.io/gitea/modules/auth/password"
"code.gitea.io/gitea/modules/setting"
"github.com/urfave/cli"
)
var microcmdUserChangePassword = cli.Command{
Name: "change-password",
Usage: "Change a user's password",
Action: runChangePassword,
Flags: []cli.Flag{
cli.StringFlag{
Name: "username,u",
Value: "",
Usage: "The user to change password for",
},
cli.StringFlag{
Name: "password,p",
Value: "",
Usage: "New password to set for user",
},
},
}
func runChangePassword(c *cli.Context) error {
if err := argsSet(c, "username", "password"); err != nil {
return err
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
if len(c.String("password")) < setting.MinPasswordLength {
return fmt.Errorf("Password is not long enough. Needs to be at least %d", setting.MinPasswordLength)
}
if !pwd.IsComplexEnough(c.String("password")) {
return errors.New("Password does not meet complexity requirements")
}
pwned, err := pwd.IsPwned(context.Background(), c.String("password"))
if err != nil {
return err
}
if pwned {
return errors.New("The password you chose is on a list of stolen passwords previously exposed in public data breaches. Please try again with a different password.\nFor more details, see https://haveibeenpwned.com/Passwords")
}
uname := c.String("username")
user, err := user_model.GetUserByName(ctx, uname)
if err != nil {
return err
}
if err = user.SetPassword(c.String("password")); err != nil {
return err
}
if err = user_model.UpdateUserCols(ctx, user, "passwd", "passwd_hash_algo", "salt"); err != nil {
return err
}
fmt.Printf("%s's password has been successfully updated!\n", user.Name)
return nil
}
+169
View File
@@ -0,0 +1,169 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"errors"
"fmt"
"os"
auth_model "code.gitea.io/gitea/models/auth"
user_model "code.gitea.io/gitea/models/user"
pwd "code.gitea.io/gitea/modules/auth/password"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"github.com/urfave/cli"
)
var microcmdUserCreate = cli.Command{
Name: "create",
Usage: "Create a new user in database",
Action: runCreateUser,
Flags: []cli.Flag{
cli.StringFlag{
Name: "name",
Usage: "Username. DEPRECATED: use username instead",
},
cli.StringFlag{
Name: "username",
Usage: "Username",
},
cli.StringFlag{
Name: "password",
Usage: "User password",
},
cli.StringFlag{
Name: "email",
Usage: "User email address",
},
cli.BoolFlag{
Name: "admin",
Usage: "User is an admin",
},
cli.BoolFlag{
Name: "random-password",
Usage: "Generate a random password for the user",
},
cli.BoolFlag{
Name: "must-change-password",
Usage: "Set this option to false to prevent forcing the user to change their password after initial login, (Default: true)",
},
cli.IntFlag{
Name: "random-password-length",
Usage: "Length of the random password to be generated",
Value: 12,
},
cli.BoolFlag{
Name: "access-token",
Usage: "Generate access token for the user",
},
cli.BoolFlag{
Name: "restricted",
Usage: "Make a restricted user account",
},
},
}
func runCreateUser(c *cli.Context) error {
if err := argsSet(c, "email"); err != nil {
return err
}
if c.IsSet("name") && c.IsSet("username") {
return errors.New("Cannot set both --name and --username flags")
}
if !c.IsSet("name") && !c.IsSet("username") {
return errors.New("One of --name or --username flags must be set")
}
if c.IsSet("password") && c.IsSet("random-password") {
return errors.New("cannot set both -random-password and -password flags")
}
var username string
if c.IsSet("username") {
username = c.String("username")
} else {
username = c.String("name")
fmt.Fprintf(os.Stderr, "--name flag is deprecated. Use --username instead.\n")
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
var password string
if c.IsSet("password") {
password = c.String("password")
} else if c.IsSet("random-password") {
var err error
password, err = pwd.Generate(c.Int("random-password-length"))
if err != nil {
return err
}
fmt.Printf("generated random password is '%s'\n", password)
} else {
return errors.New("must set either password or random-password flag")
}
// always default to true
changePassword := true
// If this is the first user being created.
// Take it as the admin and don't force a password update.
if n := user_model.CountUsers(nil); n == 0 {
changePassword = false
}
if c.IsSet("must-change-password") {
changePassword = c.Bool("must-change-password")
}
restricted := util.OptionalBoolNone
if c.IsSet("restricted") {
restricted = util.OptionalBoolOf(c.Bool("restricted"))
}
// default user visibility in app.ini
visibility := setting.Service.DefaultUserVisibilityMode
u := &user_model.User{
Name: username,
Email: c.String("email"),
Passwd: password,
IsAdmin: c.Bool("admin"),
MustChangePassword: changePassword,
Visibility: visibility,
}
overwriteDefault := &user_model.CreateUserOverwriteOptions{
IsActive: util.OptionalBoolTrue,
IsRestricted: restricted,
}
if err := user_model.CreateUser(u, overwriteDefault); err != nil {
return fmt.Errorf("CreateUser: %w", err)
}
if c.Bool("access-token") {
t := &auth_model.AccessToken{
Name: "gitea-admin",
UID: u.ID,
}
if err := auth_model.NewAccessToken(t); err != nil {
return err
}
fmt.Printf("Access token was successfully created... %s\n", t.Token)
}
fmt.Printf("New user '%s' has been successfully created!\n", username)
return nil
}
+78
@@ -0,0 +1,78 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"fmt"
"strings"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/storage"
user_service "code.gitea.io/gitea/services/user"
"github.com/urfave/cli"
)
var microcmdUserDelete = cli.Command{
Name: "delete",
Usage: "Delete specific user by id, name or email",
Flags: []cli.Flag{
cli.Int64Flag{
Name: "id",
Usage: "ID of user of the user to delete",
},
cli.StringFlag{
Name: "username,u",
Usage: "Username of the user to delete",
},
cli.StringFlag{
Name: "email,e",
Usage: "Email of the user to delete",
},
cli.BoolFlag{
Name: "purge",
Usage: "Purge user, all their repositories, organizations and comments",
},
},
Action: runDeleteUser,
}
func runDeleteUser(c *cli.Context) error {
if !c.IsSet("id") && !c.IsSet("username") && !c.IsSet("email") {
return fmt.Errorf("You must provide the id, username or email of a user to delete")
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
if err := storage.Init(); err != nil {
return err
}
var err error
var user *user_model.User
if c.IsSet("email") {
user, err = user_model.GetUserByEmail(c.String("email"))
} else if c.IsSet("username") {
user, err = user_model.GetUserByName(ctx, c.String("username"))
} else {
user, err = user_model.GetUserByID(c.Int64("id"))
}
if err != nil {
return err
}
if c.IsSet("username") && user.LowerName != strings.ToLower(strings.TrimSpace(c.String("username"))) {
return fmt.Errorf("The user %s who has email %s does not match the provided username %s", user.Name, c.String("email"), c.String("username"))
}
if c.IsSet("id") && user.ID != c.Int64("id") {
return fmt.Errorf("The user %s does not match the provided id %d", user.Name, c.Int64("id"))
}
return user_service.DeleteUser(ctx, user, c.Bool("purge"))
}
+69
@@ -0,0 +1,69 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"fmt"
auth_model "code.gitea.io/gitea/models/auth"
user_model "code.gitea.io/gitea/models/user"
"github.com/urfave/cli"
)
var microcmdUserGenerateAccessToken = cli.Command{
Name: "generate-access-token",
Usage: "Generate an access token for a specific user",
Flags: []cli.Flag{
cli.StringFlag{
Name: "username,u",
Usage: "Username",
},
cli.StringFlag{
Name: "token-name,t",
Usage: "Token name",
Value: "gitea-admin",
},
cli.BoolFlag{
Name: "raw",
Usage: "Display only the token value",
},
},
Action: runGenerateAccessToken,
}
func runGenerateAccessToken(c *cli.Context) error {
if !c.IsSet("username") {
return fmt.Errorf("You must provide a username to generate a token for")
}
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
user, err := user_model.GetUserByName(ctx, c.String("username"))
if err != nil {
return err
}
t := &auth_model.AccessToken{
Name: c.String("token-name"),
UID: user.ID,
}
if err := auth_model.NewAccessToken(t); err != nil {
return err
}
if c.Bool("raw") {
fmt.Printf("%s\n", t.Token)
} else {
fmt.Printf("Access token was successfully created: %s\n", t.Token)
}
return nil
}
+60
@@ -0,0 +1,60 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"fmt"
"os"
"text/tabwriter"
user_model "code.gitea.io/gitea/models/user"
"github.com/urfave/cli"
)
var microcmdUserList = cli.Command{
Name: "list",
Usage: "List users",
Action: runListUsers,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "admin",
Usage: "List only admin users",
},
},
}
func runListUsers(c *cli.Context) error {
ctx, cancel := installSignals()
defer cancel()
if err := initDB(ctx); err != nil {
return err
}
users, err := user_model.GetAllUsers()
if err != nil {
return err
}
w := tabwriter.NewWriter(os.Stdout, 5, 0, 1, ' ', 0)
if c.IsSet("admin") {
fmt.Fprintf(w, "ID\tUsername\tEmail\tIsActive\n")
for _, u := range users {
if u.IsAdmin {
fmt.Fprintf(w, "%d\t%s\t%s\t%t\n", u.ID, u.Name, u.Email, u.IsActive)
}
}
} else {
twofa := user_model.UserList(users).GetTwoFaStatus()
fmt.Fprintf(w, "ID\tUsername\tEmail\tIsActive\tIsAdmin\t2FA\n")
for _, u := range users {
fmt.Fprintf(w, "%d\t%s\t%s\t%t\t%t\t%t\n", u.ID, u.Name, u.Email, u.IsActive, u.IsAdmin, twofa[u.ID])
}
}
w.Flush()
return nil
}
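The table layout in `runListUsers` comes from the standard library's `text/tabwriter`; columns are only aligned once the writer is flushed. A small, self-contained illustration using the same writer settings:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Same writer settings as runListUsers: minwidth 5, tabwidth 0,
	// padding 1, padded with spaces, no extra flags.
	w := tabwriter.NewWriter(os.Stdout, 5, 0, 1, ' ', 0)
	fmt.Fprintf(w, "ID\tUsername\tEmail\tIsActive\n")
	fmt.Fprintf(w, "%d\t%s\t%s\t%t\n", 1, "user1", "user1@example.com", true)
	fmt.Fprintf(w, "%d\t%s\t%s\t%t\n", 42, "someone-longer", "longer@example.com", false)
	w.Flush() // columns are only aligned when the writer is flushed
}
```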
+58
@@ -0,0 +1,58 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"errors"
"fmt"
user_model "code.gitea.io/gitea/models/user"
"github.com/urfave/cli"
)
var microcmdUserMustChangePassword = cli.Command{
Name: "must-change-password",
Usage: "Set the must change password flag for the provided users or all users",
Action: runMustChangePassword,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "all,A",
Usage: "All users must change password, except those explicitly excluded with --exclude",
},
cli.StringSliceFlag{
Name: "exclude,e",
Usage: "Do not change the must-change-password flag for these users",
},
cli.BoolFlag{
Name: "unset",
Usage: "Instead of setting the must-change-password flag, unset it",
},
},
}
func runMustChangePassword(c *cli.Context) error {
ctx, cancel := installSignals()
defer cancel()
if c.NArg() == 0 && !c.IsSet("all") {
return errors.New("either usernames or --all must be provided")
}
mustChangePassword := !c.Bool("unset")
all := c.Bool("all")
exclude := c.StringSlice("exclude")
if err := initDB(ctx); err != nil {
return err
}
n, err := user_model.SetMustChangePassword(ctx, all, mustChangePassword, c.Args(), exclude)
if err != nil {
return err
}
fmt.Printf("Updated %d users setting MustChangePassword to %t\n", n, mustChangePassword)
return nil
}
+15 -10
@@ -12,6 +12,7 @@ import (
"net/url"
"os"
"os/exec"
"path/filepath"
"regexp"
"strconv"
"strings"
@@ -290,17 +291,21 @@ func runServ(c *cli.Context) error {
return nil
}
// Special handle for Windows.
if setting.IsWindows {
verb = strings.Replace(verb, "-", " ", 1)
}
var gitcmd *exec.Cmd
verbs := strings.Split(verb, " ")
if len(verbs) == 2 {
gitcmd = exec.CommandContext(ctx, verbs[0], verbs[1], repoPath)
} else {
gitcmd = exec.CommandContext(ctx, verb, repoPath)
gitBinPath := filepath.Dir(git.GitExecutable) // e.g. /usr/bin
gitBinVerb := filepath.Join(gitBinPath, verb) // e.g. /usr/bin/git-upload-pack
if _, err := os.Stat(gitBinVerb); err != nil {
// if the command "git-upload-pack" doesn't exist, try to split "git-upload-pack" to use the sub-command with git
// ps: Windows only has "git.exe" in the bin path, so Windows always uses this way
verbFields := strings.SplitN(verb, "-", 2)
if len(verbFields) == 2 {
// use git binary with the sub-command part: "C:\...\bin\git.exe", "upload-pack", ...
gitcmd = exec.CommandContext(ctx, git.GitExecutable, verbFields[1], repoPath)
}
}
if gitcmd == nil {
// by default, use the verb (it has been checked above by allowedCommands)
gitcmd = exec.CommandContext(ctx, gitBinVerb, repoPath)
}
process.SetSysProcAttribute(gitcmd)
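Condensed, the new lookup order in `runServ` is: prefer the standalone `git-<verb>` binary that sits next to the configured git executable, and fall back to `git <sub-command>` when that binary does not exist (the usual case on Windows, which only ships `git.exe`). The sketch below restates that decision on its own; the helper name and the sample paths are illustrative and not Gitea code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// buildGitServeCommand mirrors the resolution added above: try the dedicated
// "git-<verb>" binary next to gitExecutable first, then fall back to
// "git <verb>" when that binary is missing.
func buildGitServeCommand(gitExecutable, verb, repoPath string) *exec.Cmd {
	gitBinVerb := filepath.Join(filepath.Dir(gitExecutable), verb) // e.g. /usr/bin/git-upload-pack
	if _, err := os.Stat(gitBinVerb); err != nil {
		if fields := strings.SplitN(verb, "-", 2); len(fields) == 2 {
			// e.g. "C:\...\git.exe" "upload-pack" <repo>
			return exec.Command(gitExecutable, fields[1], repoPath)
		}
	}
	// default: use the standalone binary (or the verb as-is when it has no "-")
	return exec.Command(gitBinVerb, repoPath)
}

func main() {
	cmd := buildGitServeCommand("/usr/bin/git", "git-upload-pack", "/repos/test.git")
	fmt.Println(cmd.Args)
}
```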
@@ -523,7 +523,21 @@ Certain queues have defaults that override the defaults set in `[queue]` (this o
- `IMPORT_LOCAL_PATHS`: **false**: Set to `false` to prevent all users (including admin) from importing local paths on the server.
- `INTERNAL_TOKEN`: **\<random at every install if no uri set\>**: Secret used to validate communication within Gitea binary.
- `INTERNAL_TOKEN_URI`: **<empty>**: Instead of defining INTERNAL_TOKEN in the configuration, this configuration option can be used to give Gitea a path to a file that contains the internal token (example value: `file:/etc/gitea/internal_token`)
- `PASSWORD_HASH_ALGO`: **pbkdf2**: The hash algorithm to use \[argon2, pbkdf2, scrypt, bcrypt\], argon2 will spend more memory than others.
- `PASSWORD_HASH_ALGO`: **pbkdf2**: The hash algorithm to use \[argon2, pbkdf2, pbkdf2_v1, scrypt, bcrypt\], argon2 and scrypt will spend significant amounts of memory.
- Note: The default parameters for `pbkdf2` hashing have changed - the previous settings are available as `pbkdf2_v1` but are not recommended.
- The hash functions may be tuned by using `$` after the algorithm:
- `argon2$<time>$<memory>$<threads>$<key-length>`
- `bcrypt$<cost>`
- `pbkdf2$<iterations>$<key-length>`
- `scrypt$<n>$<r>$<p>$<key-length>`
- The defaults are:
- `argon2`: `argon2$2$65536$8$50`
- `bcrypt`: `bcrypt$10`
- `pbkdf2`: `pbkdf2$320000$50`
- `pbkdf2_v1`: `pbkdf2$10000$50`
- `pbkdf2_v2`: `pbkdf2$320000$50`
- `scrypt`: `scrypt$65536$16$2$50`
- Adjusting the algorithm parameters using this functionality is done at your own risk.
- `CSRF_COOKIE_HTTP_ONLY`: **true**: Set false to allow JavaScript to read CSRF cookie.
- `MIN_PASSWORD_LENGTH`: **6**: Minimum password length for new users.
- `PASSWORD_COMPLEXITY`: **off**: Comma separated list of character classes required to pass minimum complexity. If left empty or no valid values are specified, checking is disabled (off):
@@ -7,7 +7,7 @@ toc: false
draft: false
menu:
sidebar:
parent: "advanced"
parent: "developers"
name: "加入 Gitea 开源"
weight: 10
identifier: "hacking-on-gitea"
@@ -15,7 +15,7 @@ menu:
# Hacking on Gitea
First you need a working environment; this is the same as for [installing from source]({{< relref "from-source.zh-cn.md" >}}). If you have not set it up yet, read that section first.
First you need a working environment; this is the same as for [installing from source]({{< relref "doc/installation/from-source.zh-cn.md" >}}). If you have not set it up yet, read that section first.
If you want to contribute code to Gitea, fork the project and use `master` as your development branch. Gitea uses Govendor to manage its dependencies, so all dependencies are automatically copied into the vendor subdirectory by the tool. Download the source with the following command:
@@ -32,4 +32,4 @@ chmod +x gitea
## Need help?
If this page does not contain what you need, please visit the [help page]({{< relref "seek-help.zh-cn.md" >}})
If this page does not contain what you need, please visit the [help page]({{< relref "doc/help/seek-help.zh-cn.md" >}})
@@ -64,11 +64,11 @@ The OpenSUSE Build Service provides packages for [openSUSE and SLE](https://software.opensuse.org/downlo
choco install gitea
```
You can also [install from a binary]({{< relref "from-binary.zh-cn.md" >}}).
You can also [install from a binary]({{< relref "doc/installation/from-binary.zh-cn.md" >}}).
## macOS
On macOS we currently only support installation via `brew`. If you do not have [Homebrew](http://brew.sh/) installed, you can also look at [installing from a binary]({{< relref "from-binary.zh-cn.md" >}}). Once `brew` is installed, run:
On macOS we currently only support installation via `brew`. If you do not have [Homebrew](http://brew.sh/) installed, you can also look at [installing from a binary]({{< relref "doc/installation/from-binary.zh-cn.md" >}}). Once `brew` is installed, run:
```
brew tap gitea/tap https://gitea.com/gitea/homebrew-gitea
@@ -105,4 +105,4 @@ make install clean
## Need help?
If this page does not contain what you need, please visit the [help page]({{< relref "seek-help.zh-cn.md" >}})
If this page does not contain what you need, please visit the [help page]({{< relref "doc/help/seek-help.zh-cn.md" >}})
@@ -54,7 +54,7 @@ git checkout v{{< version >}}
- `go` {{< min-go-version >}} or later, see [here](https://golang.google.cn/doc/install)
- `node` {{< min-node-version >}} or later with `npm`, see [here](https://nodejs.org/zh-cn/download/)
- `make`, see [here]({{< relref "make.zh-cn.md" >}})</a>
- `make`, see [here]({{< relref "doc/advanced/make.zh-cn.md" >}})
The various available [make tasks](https://github.com/go-gitea/gitea/blob/main/Makefile) can be used to make the build process more convenient.
@@ -104,4 +104,4 @@ CC=aarch64-unknown-linux-gnu-gcc GOOS=linux GOARCH=arm64 TAGS="bindata sqlite sq
## Need help?
If this page does not contain what you need, please visit the [help page]({{< relref "seek-help.zh-cn.md" >}})
If this page does not contain what you need, please visit the [help page]({{< relref "doc/help/seek-help.zh-cn.md" >}})
+2
@@ -77,6 +77,8 @@ For example:
pip install --index-url https://testuser:password123@gitea.example.com/api/packages/testuser/pypi/simple --no-deps test_package
```
You can use `--extra-index-url` instead of `--index-url`, but that makes you vulnerable to dependency confusion attacks because `pip` checks the official PyPI repository for the package before it checks the specified custom repository. Read the `pip` docs for more information.
## Supported commands
```
@@ -99,6 +99,13 @@ Admin operations:
- `--password value`, `-p value`: New password. Required.
- Examples:
- `gitea admin user change-password --username myname --password asecurepassword`
- `must-change-password`:
- Args:
- `[username...]`: Users that must change their passwords
- Options:
- `--all`, `-A`: Force a password change for all users
- `--exclude username`, `-e username`: Exclude the given user. Can be set multiple times.
- `--unset`: Revoke forced password change for the given users
- `regenerate`
- Options:
- `hooks`: Regenerate Git Hooks for all repositories
@@ -124,6 +131,7 @@ Admin operations:
- `--secret`: Client Secret.
- `--auto-discover-url`: OpenID Connect Auto Discovery URL (only required when using OpenID Connect as provider).
- `--use-custom-urls`: Use custom URLs for GitLab/GitHub OAuth endpoints.
- `--custom-tenant-id`: Use custom Tenant ID for OAuth endpoints.
- `--custom-auth-url`: Use a custom Authorization URL (option for GitLab/GitHub).
- `--custom-token-url`: Use a custom Token URL (option for GitLab/GitHub).
- `--custom-profile-url`: Use a custom Profile URL (option for GitLab/GitHub).
@@ -147,6 +155,7 @@ Admin operations:
- `--secret`: Client Secret.
- `--auto-discover-url`: OpenID Connect Auto Discovery URL (only required when using OpenID Connect as provider).
- `--use-custom-urls`: Use custom URLs for GitLab/GitHub OAuth endpoints.
- `--custom-tenant-id`: Use custom Tenant ID for OAuth endpoints.
- `--custom-auth-url`: Use a custom Authorization URL (option for GitLab/GitHub).
- `--custom-token-url`: Use a custom Token URL (option for GitLab/GitHub).
- `--custom-profile-url`: Use a custom Profile URL (option for GitLab/GitHub).
+1 -1
@@ -70,4 +70,4 @@ Gitea's primary goal is to create a Git service that is extremely easy to install, runs very fast, and
## Need help?
If this page does not contain what you need, please visit the [help page]({{< relref "seek-help.zh-cn.md" >}})
If this page does not contain what you need, please visit the [help page]({{< relref "doc/help/seek-help.zh-cn.md" >}})
+3
@@ -75,6 +75,8 @@ require (
github.com/niklasfasching/go-org v1.6.5
github.com/oliamb/cutter v0.2.2
github.com/olivere/elastic/v7 v7.0.32
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc2
github.com/pkg/errors v0.9.1
github.com/pquerna/otp v1.3.0
github.com/prometheus/client_golang v1.13.0
@@ -285,6 +287,7 @@ require (
go.uber.org/multierr v1.8.0 // indirect
go.uber.org/zap v1.23.0 // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde // indirect
golang.org/x/time v0.0.0-20220922220347-f3bd1da661af // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90 // indirect
+6 -1
@@ -1174,6 +1174,10 @@ github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1y
github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/onsi/gomega v1.18.1 h1:M1GfJqGRrBrrGGsbxzV5dqM2U2ApXefZCQpkukxYRLE=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0-rc2 h1:2zx/Stx4Wc5pIPDvIxHXvXtQFW/7XWJGmnM7r3wg034=
github.com/opencontainers/image-spec v1.1.0-rc2/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opentracing-contrib/go-observer v0.0.0-20170622124052-a52f23424492/go.mod h1:Ngi6UdF0k5OKD5t5wlmGhe/EDKPoUM3BXZSSfIuJbis=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
@@ -1759,7 +1763,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw=
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde h1:ejfdSekXMDxDLbRrJMwUk6KnSLZ2McaUCVcIKM+N6jc=
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+1 -1
@@ -157,7 +157,7 @@ func CreateRepoTransferNotification(doer, newOwner *user_model.User, repo *repo_
}
for i := range users {
notify = append(notify, &Notification{
UserID: users[i].ID,
UserID: i,
RepoID: repo.ID,
Status: NotificationStatusUnread,
UpdatedBy: doer.ID,
+4 -2
@@ -24,8 +24,10 @@ type contextKey struct {
}
// enginedContextKey is a context key. It is used with context.Value() to get the current Engined for the context
var enginedContextKey = &contextKey{"engined"}
var _ Engined = &Context{}
var (
enginedContextKey = &contextKey{"engined"}
_ Engined = &Context{}
)
// Context represents a db context
type Context struct {
+49 -1
@@ -5,8 +5,11 @@
package db
import (
"context"
"code.gitea.io/gitea/modules/setting"
"xorm.io/builder"
"xorm.io/xorm"
)
@@ -19,6 +22,7 @@ const (
type Paginator interface {
GetSkipTake() (skip, take int)
GetStartEnd() (start, end int)
IsListAll() bool
}
// GetPaginatedSession creates a paginated database session
@@ -45,9 +49,12 @@ func SetEnginePagination(e Engine, p Paginator) Engine {
// ListOptions options to paginate results
type ListOptions struct {
PageSize int
Page int // start from 1
Page int // start from 1
ListAll bool // if true, then PageSize and Page will not be taken
}
var _ Paginator = &ListOptions{}
// GetSkipTake returns the skip and take values
func (opts *ListOptions) GetSkipTake() (skip, take int) {
opts.SetDefaultValues()
@@ -61,6 +68,11 @@ func (opts *ListOptions) GetStartEnd() (start, end int) {
return start, end
}
// IsListAll indicates PageSize and Page will be ignored
func (opts *ListOptions) IsListAll() bool {
return opts.ListAll
}
// SetDefaultValues sets default values
func (opts *ListOptions) SetDefaultValues() {
if opts.PageSize <= 0 {
@@ -80,6 +92,8 @@ type AbsoluteListOptions struct {
take int
}
var _ Paginator = &AbsoluteListOptions{}
// NewAbsoluteListOptions creates a list option with applied limits
func NewAbsoluteListOptions(skip, take int) *AbsoluteListOptions {
if skip < 0 {
@@ -94,6 +108,11 @@ func NewAbsoluteListOptions(skip, take int) *AbsoluteListOptions {
return &AbsoluteListOptions{skip, take}
}
// IsListAll will always return false
func (opts *AbsoluteListOptions) IsListAll() bool {
return false
}
// GetSkipTake returns the skip and take values
func (opts *AbsoluteListOptions) GetSkipTake() (skip, take int) {
return opts.skip, opts.take
@@ -103,3 +122,32 @@ func (opts *AbsoluteListOptions) GetSkipTake() (skip, take int) {
func (opts *AbsoluteListOptions) GetStartEnd() (start, end int) {
return opts.skip, opts.skip + opts.take
}
// FindOptions represents a find options
type FindOptions interface {
Paginator
ToConds() builder.Cond
}
// Find represents a common find function which accept an options interface
func Find[T any](ctx context.Context, opts FindOptions, objects *[]T) error {
sess := GetEngine(ctx).Where(opts.ToConds())
if !opts.IsListAll() {
sess.Limit(opts.GetSkipTake())
}
return sess.Find(&objects)
}
// Count represents a common count function which accept an options interface
func Count[T any](ctx context.Context, opts FindOptions, object T) (int64, error) {
return GetEngine(ctx).Where(opts.ToConds()).Count(object)
}
// FindAndCount represents a common findandcount function which accept an options interface
func FindAndCount[T any](ctx context.Context, opts FindOptions, objects *[]T) (int64, error) {
sess := GetEngine(ctx).Where(opts.ToConds())
if !opts.IsListAll() {
sess.Limit(opts.GetSkipTake())
}
return sess.FindAndCount(&objects)
}
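The new generic helpers expect an options type that satisfies `FindOptions`, that is, the `Paginator` methods plus `ToConds`. Embedding `db.ListOptions` supplies the pagination behaviour, so a concrete options struct only has to express its filter conditions. Below is a hedged sketch of how a caller inside the Gitea module might wire that up; `exampleBean`, `exampleFindOptions` and `findExampleBeans` are hypothetical names used only for illustration.

```go
package example // hypothetical package inside the Gitea module

import (
	"context"

	"code.gitea.io/gitea/models/db"

	"xorm.io/builder"
)

// exampleBean is a hypothetical xorm bean; any mapped table would do.
type exampleBean struct {
	ID     int64 `xorm:"pk autoincr"`
	RepoID int64 `xorm:"INDEX"`
}

// exampleFindOptions satisfies db.FindOptions: the embedded db.ListOptions
// provides the Paginator methods, so only ToConds is written by hand.
type exampleFindOptions struct {
	db.ListOptions
	RepoID int64
}

func (opts *exampleFindOptions) ToConds() builder.Cond {
	cond := builder.NewCond()
	if opts.RepoID > 0 {
		cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
	}
	return cond
}

// findExampleBeans shows the intended call pattern of the new generic helpers.
func findExampleBeans(ctx context.Context, repoID int64) ([]*exampleBean, int64, error) {
	opts := &exampleFindOptions{
		ListOptions: db.ListOptions{Page: 1, PageSize: 20},
		RepoID:      repoID,
	}
	beans := make([]*exampleBean, 0, opts.PageSize)
	total, err := db.FindAndCount(ctx, opts, &beans)
	return beans, total, err
}
```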
+13
@@ -544,3 +544,16 @@
repo_id: 51
type: 2
created_unix: 946684810
-
id: 80
repo_id: 31
type: 1
created_unix: 946684810
-
id: 81
repo_id: 31
type: 3
config: "{\"IgnoreWhitespaceConflicts\":false,\"AllowMerge\":true,\"AllowRebase\":true,\"AllowRebaseMerge\":true,\"AllowSquash\":true}"
created_unix: 946684810
+11
@@ -140,3 +140,14 @@
num_members: 1
includes_all_repositories: false
can_create_org_repo: false
-
id: 14
org_id: 3
lower_name: teamcreaterepo
name: teamCreateRepo
authorize: 2 # write
num_repos: 0
num_members: 1
includes_all_repositories: false
can_create_org_repo: true
+6
@@ -93,3 +93,9 @@
org_id: 19
team_id: 6
uid: 31
-
id: 17
org_id: 3
team_id: 14
uid: 2
+66 -66
@@ -8,8 +8,8 @@
email: user1@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user1
@@ -45,8 +45,8 @@
email: user2@example.com
keep_email_private: true
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user2
@@ -82,8 +82,8 @@
email: user3@example.com
keep_email_private: false
email_notifications_preference: onmention
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user3
@@ -104,7 +104,7 @@
num_following: 0
num_stars: 0
num_repos: 3
num_teams: 4
num_teams: 5
num_members: 3
visibility: 0
repo_admin_change_team_access: false
@@ -119,8 +119,8 @@
email: user4@example.com
keep_email_private: false
email_notifications_preference: onmention
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user4
@@ -156,8 +156,8 @@
email: user5@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user5
@@ -193,8 +193,8 @@
email: user6@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user6
@@ -230,8 +230,8 @@
email: user7@example.com
keep_email_private: false
email_notifications_preference: disabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user7
@@ -267,8 +267,8 @@
email: user8@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user8
@@ -304,8 +304,8 @@
email: user9@example.com
keep_email_private: false
email_notifications_preference: onmention
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user9
@@ -341,8 +341,8 @@
email: user10@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user10
@@ -378,8 +378,8 @@
email: user11@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user11
@@ -415,8 +415,8 @@
email: user12@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user12
@@ -452,8 +452,8 @@
email: user13@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user13
@@ -489,8 +489,8 @@
email: user14@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user14
@@ -526,8 +526,8 @@
email: user15@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user15
@@ -563,8 +563,8 @@
email: user16@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user16
@@ -600,8 +600,8 @@
email: user17@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user17
@@ -637,8 +637,8 @@
email: user18@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user18
@@ -674,8 +674,8 @@
email: user19@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user19
@@ -711,8 +711,8 @@
email: user20@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user20
@@ -748,8 +748,8 @@
email: user21@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user21
@@ -785,8 +785,8 @@
email: limited_org@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: limited_org
@@ -822,8 +822,8 @@
email: privated_org@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: privated_org
@@ -859,8 +859,8 @@
email: user24@example.com
keep_email_private: true
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user24
@@ -896,8 +896,8 @@
email: org25@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: org25
@@ -933,8 +933,8 @@
email: org26@example.com
keep_email_private: false
email_notifications_preference: onmention
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: org26
@@ -970,8 +970,8 @@
email: user27@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user27
@@ -1007,8 +1007,8 @@
email: user28@example.com
keep_email_private: true
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user28
@@ -1044,8 +1044,8 @@
email: user29@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user29
@@ -1081,8 +1081,8 @@
email: user30@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user30
@@ -1118,8 +1118,8 @@
email: user31@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user31
@@ -1155,7 +1155,7 @@
email: user32@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: 7d93daa0d1e6f2305cc8fa496847d61dc7320bb16262f9c55dd753480207234cdd96a93194e408341971742f4701772a025a
passwd: 7d93daa0d1e6f2305cc8fa496847d61dc7320bb16262f9c55dd753480207234cdd96a93194e408341971742f47017
passwd_hash_algo: argon2
must_change_password: false
login_source: 0
@@ -1192,8 +1192,8 @@
email: user33@example.com
keep_email_private: false
email_notifications_preference: enabled
passwd: a3d5fcd92bae586c2e3dbe72daea7a0d27833a8d0227aa1704f4bbd775c1f3b03535b76dd93b0d4d8d22a519dca47df1547b
passwd_hash_algo: argon2
passwd: e82bc8ae42a53b98c3bd0f941aacc4aa2a264407534b0a11bf270137f67af912f694b67951f92148c45f91717e1478ca7889
passwd_hash_algo: pbkdf2$50000$50
must_change_password: false
login_source: 0
login_name: user33
+30 -154
@@ -9,9 +9,7 @@ package issues
import (
"context"
"fmt"
"regexp"
"strconv"
"strings"
"unicode/utf8"
"code.gitea.io/gitea/models/db"
@@ -23,8 +21,6 @@ import (
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/json"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/markup"
"code.gitea.io/gitea/modules/markup/markdown"
"code.gitea.io/gitea/modules/references"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
@@ -697,31 +693,6 @@ func (c *Comment) LoadReview() error {
return c.loadReview(db.DefaultContext)
}
var notEnoughLines = regexp.MustCompile(`fatal: file .* has only \d+ lines?`)
func (c *Comment) checkInvalidation(doer *user_model.User, repo *git.Repository, branch string) error {
// FIXME differentiate between previous and proposed line
commit, err := repo.LineBlame(branch, repo.Path, c.TreePath, uint(c.UnsignedLine()))
if err != nil && (strings.Contains(err.Error(), "fatal: no such path") || notEnoughLines.MatchString(err.Error())) {
c.Invalidated = true
return UpdateComment(c, doer)
}
if err != nil {
return err
}
if c.CommitSHA != "" && c.CommitSHA != commit.ID.String() {
c.Invalidated = true
return UpdateComment(c, doer)
}
return nil
}
// CheckInvalidation checks if the line of code comment got changed by another commit.
// If the line got changed the comment is going to be invalidated.
func (c *Comment) CheckInvalidation(repo *git.Repository, doer *user_model.User, branch string) error {
return c.checkInvalidation(doer, repo, branch)
}
// DiffSide returns "previous" if Comment.Line is a LOC of the previous changes and "proposed" if it is a LOC of the proposed changes.
func (c *Comment) DiffSide() string {
if c.Line < 0 {
@@ -1065,23 +1036,28 @@ func GetCommentByID(ctx context.Context, id int64) (*Comment, error) {
// FindCommentsOptions describes the conditions to Find comments
type FindCommentsOptions struct {
db.ListOptions
RepoID int64
IssueID int64
ReviewID int64
Since int64
Before int64
Line int64
TreePath string
Type CommentType
RepoID int64
IssueID int64
ReviewID int64
Since int64
Before int64
Line int64
TreePath string
Type CommentType
IssueIDs []int64
Invalidated util.OptionalBool
}
func (opts *FindCommentsOptions) toConds() builder.Cond {
// ToConds implements FindOptions interface
func (opts *FindCommentsOptions) ToConds() builder.Cond {
cond := builder.NewCond()
if opts.RepoID > 0 {
cond = cond.And(builder.Eq{"issue.repo_id": opts.RepoID})
}
if opts.IssueID > 0 {
cond = cond.And(builder.Eq{"comment.issue_id": opts.IssueID})
} else if len(opts.IssueIDs) > 0 {
cond = cond.And(builder.In("comment.issue_id", opts.IssueIDs))
}
if opts.ReviewID > 0 {
cond = cond.And(builder.Eq{"comment.review_id": opts.ReviewID})
@@ -1101,13 +1077,16 @@ func (opts *FindCommentsOptions) toConds() builder.Cond {
if len(opts.TreePath) > 0 {
cond = cond.And(builder.Eq{"comment.tree_path": opts.TreePath})
}
if !opts.Invalidated.IsNone() {
cond = cond.And(builder.Eq{"comment.invalidated": opts.Invalidated.IsTrue()})
}
return cond
}
// FindComments returns all comments according options
func FindComments(ctx context.Context, opts *FindCommentsOptions) ([]*Comment, error) {
comments := make([]*Comment, 0, 10)
sess := db.GetEngine(ctx).Where(opts.toConds())
sess := db.GetEngine(ctx).Where(opts.ToConds())
if opts.RepoID > 0 {
sess.Join("INNER", "issue", "issue.id = comment.issue_id")
}
@@ -1126,13 +1105,19 @@ func FindComments(ctx context.Context, opts *FindCommentsOptions) ([]*Comment, e
// CountComments count all comments according options by ignoring pagination
func CountComments(opts *FindCommentsOptions) (int64, error) {
sess := db.GetEngine(db.DefaultContext).Where(opts.toConds())
sess := db.GetEngine(db.DefaultContext).Where(opts.ToConds())
if opts.RepoID > 0 {
sess.Join("INNER", "issue", "issue.id = comment.issue_id")
}
return sess.Count(&Comment{})
}
// UpdateCommentInvalidate updates comment invalidated column
func UpdateCommentInvalidate(ctx context.Context, c *Comment) error {
_, err := db.GetEngine(ctx).ID(c.ID).Cols("invalidated").Update(c)
return err
}
// UpdateComment updates information of comment.
func UpdateComment(c *Comment, doer *user_model.User) error {
ctx, committer, err := db.TxContext()
@@ -1191,120 +1176,6 @@ func DeleteComment(ctx context.Context, comment *Comment) error {
return DeleteReaction(ctx, &ReactionOptions{CommentID: comment.ID})
}
// CodeComments represents comments on code by using this structure: FILENAME -> LINE (+ == proposed; - == previous) -> COMMENTS
type CodeComments map[string]map[int64][]*Comment
// FetchCodeComments will return a 2d-map: ["Path"]["Line"] = Comments at line
func FetchCodeComments(ctx context.Context, issue *Issue, currentUser *user_model.User) (CodeComments, error) {
return fetchCodeCommentsByReview(ctx, issue, currentUser, nil)
}
func fetchCodeCommentsByReview(ctx context.Context, issue *Issue, currentUser *user_model.User, review *Review) (CodeComments, error) {
pathToLineToComment := make(CodeComments)
if review == nil {
review = &Review{ID: 0}
}
opts := FindCommentsOptions{
Type: CommentTypeCode,
IssueID: issue.ID,
ReviewID: review.ID,
}
comments, err := findCodeComments(ctx, opts, issue, currentUser, review)
if err != nil {
return nil, err
}
for _, comment := range comments {
if pathToLineToComment[comment.TreePath] == nil {
pathToLineToComment[comment.TreePath] = make(map[int64][]*Comment)
}
pathToLineToComment[comment.TreePath][comment.Line] = append(pathToLineToComment[comment.TreePath][comment.Line], comment)
}
return pathToLineToComment, nil
}
func findCodeComments(ctx context.Context, opts FindCommentsOptions, issue *Issue, currentUser *user_model.User, review *Review) ([]*Comment, error) {
var comments []*Comment
if review == nil {
review = &Review{ID: 0}
}
conds := opts.toConds()
if review.ID == 0 {
conds = conds.And(builder.Eq{"invalidated": false})
}
e := db.GetEngine(ctx)
if err := e.Where(conds).
Asc("comment.created_unix").
Asc("comment.id").
Find(&comments); err != nil {
return nil, err
}
if err := issue.LoadRepo(ctx); err != nil {
return nil, err
}
if err := CommentList(comments).loadPosters(ctx); err != nil {
return nil, err
}
// Find all reviews by ReviewID
reviews := make(map[int64]*Review)
ids := make([]int64, 0, len(comments))
for _, comment := range comments {
if comment.ReviewID != 0 {
ids = append(ids, comment.ReviewID)
}
}
if err := e.In("id", ids).Find(&reviews); err != nil {
return nil, err
}
n := 0
for _, comment := range comments {
if re, ok := reviews[comment.ReviewID]; ok && re != nil {
// If the review is pending only the author can see the comments (except if the review is set)
if review.ID == 0 && re.Type == ReviewTypePending &&
(currentUser == nil || currentUser.ID != re.ReviewerID) {
continue
}
comment.Review = re
}
comments[n] = comment
n++
if err := comment.LoadResolveDoer(); err != nil {
return nil, err
}
if err := comment.LoadReactions(issue.Repo); err != nil {
return nil, err
}
var err error
if comment.RenderedContent, err = markdown.RenderString(&markup.RenderContext{
Ctx: ctx,
URLPrefix: issue.Repo.Link(),
Metas: issue.Repo.ComposeMetas(),
}, comment.Content); err != nil {
return nil, err
}
}
return comments[:n], nil
}
// FetchCodeCommentsByLine fetches the code comments for a given treePath and line number
func FetchCodeCommentsByLine(ctx context.Context, issue *Issue, currentUser *user_model.User, treePath string, line int64) ([]*Comment, error) {
opts := FindCommentsOptions{
Type: CommentTypeCode,
IssueID: issue.ID,
TreePath: treePath,
Line: line,
}
return findCodeComments(ctx, opts, issue, currentUser, nil)
}
// UpdateCommentsMigrationsByType updates comments' migrations information via given git service type and original id and poster id
func UpdateCommentsMigrationsByType(tp structs.GitServiceType, originalAuthorID string, posterID int64) error {
_, err := db.GetEngine(db.DefaultContext).Table("comment").
@@ -1549,3 +1420,8 @@ func FixCommentTypeLabelWithOutsideLabels() (int64, error) {
return res.RowsAffected()
}
// HasOriginalAuthor returns if a comment was migrated and has an original author.
func (c *Comment) HasOriginalAuthor() bool {
return c.OriginalAuthor != "" && c.OriginalAuthorID != 0
}
+129
@@ -0,0 +1,129 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package issues
import (
"context"
"code.gitea.io/gitea/models/db"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/markup"
"code.gitea.io/gitea/modules/markup/markdown"
"xorm.io/builder"
)
// CodeComments represents comments on code by using this structure: FILENAME -> LINE (+ == proposed; - == previous) -> COMMENTS
type CodeComments map[string]map[int64][]*Comment
// FetchCodeComments will return a 2d-map: ["Path"]["Line"] = Comments at line
func FetchCodeComments(ctx context.Context, issue *Issue, currentUser *user_model.User) (CodeComments, error) {
return fetchCodeCommentsByReview(ctx, issue, currentUser, nil)
}
func fetchCodeCommentsByReview(ctx context.Context, issue *Issue, currentUser *user_model.User, review *Review) (CodeComments, error) {
pathToLineToComment := make(CodeComments)
if review == nil {
review = &Review{ID: 0}
}
opts := FindCommentsOptions{
Type: CommentTypeCode,
IssueID: issue.ID,
ReviewID: review.ID,
}
comments, err := findCodeComments(ctx, opts, issue, currentUser, review)
if err != nil {
return nil, err
}
for _, comment := range comments {
if pathToLineToComment[comment.TreePath] == nil {
pathToLineToComment[comment.TreePath] = make(map[int64][]*Comment)
}
pathToLineToComment[comment.TreePath][comment.Line] = append(pathToLineToComment[comment.TreePath][comment.Line], comment)
}
return pathToLineToComment, nil
}
func findCodeComments(ctx context.Context, opts FindCommentsOptions, issue *Issue, currentUser *user_model.User, review *Review) ([]*Comment, error) {
var comments []*Comment
if review == nil {
review = &Review{ID: 0}
}
conds := opts.ToConds()
if review.ID == 0 {
conds = conds.And(builder.Eq{"invalidated": false})
}
e := db.GetEngine(ctx)
if err := e.Where(conds).
Asc("comment.created_unix").
Asc("comment.id").
Find(&comments); err != nil {
return nil, err
}
if err := issue.LoadRepo(ctx); err != nil {
return nil, err
}
if err := CommentList(comments).loadPosters(ctx); err != nil {
return nil, err
}
// Find all reviews by ReviewID
reviews := make(map[int64]*Review)
ids := make([]int64, 0, len(comments))
for _, comment := range comments {
if comment.ReviewID != 0 {
ids = append(ids, comment.ReviewID)
}
}
if err := e.In("id", ids).Find(&reviews); err != nil {
return nil, err
}
n := 0
for _, comment := range comments {
if re, ok := reviews[comment.ReviewID]; ok && re != nil {
// If the review is pending only the author can see the comments (except if the review is set)
if review.ID == 0 && re.Type == ReviewTypePending &&
(currentUser == nil || currentUser.ID != re.ReviewerID) {
continue
}
comment.Review = re
}
comments[n] = comment
n++
if err := comment.LoadResolveDoer(); err != nil {
return nil, err
}
if err := comment.LoadReactions(issue.Repo); err != nil {
return nil, err
}
var err error
if comment.RenderedContent, err = markdown.RenderString(&markup.RenderContext{
Ctx: ctx,
URLPrefix: issue.Repo.Link(),
Metas: issue.Repo.ComposeMetas(),
}, comment.Content); err != nil {
return nil, err
}
}
return comments[:n], nil
}
// FetchCodeCommentsByLine fetches the code comments for a given treePath and line number
func FetchCodeCommentsByLine(ctx context.Context, issue *Issue, currentUser *user_model.User, treePath string, line int64) ([]*Comment, error) {
opts := FindCommentsOptions{
Type: CommentTypeCode,
IssueID: issue.ID,
TreePath: treePath,
Line: line,
}
return findCodeComments(ctx, opts, issue, currentUser, nil)
}
+5
@@ -2466,3 +2466,8 @@ func DeleteOrphanedIssues() error {
}
return nil
}
// HasOriginalAuthor returns if an issue was migrated and has an original author.
func (issue *Issue) HasOriginalAuthor() bool {
return issue.OriginalAuthor != "" && issue.OriginalAuthorID != 0
}
+4 -4
@@ -755,7 +755,7 @@ func CountOrphanedLabels() (int64, error) {
norepo, err := db.GetEngine(db.DefaultContext).Table("label").
Where(builder.And(
builder.Gt{"repo_id": 0},
builder.NotIn("repo_id", builder.Select("id").From("repository")),
builder.NotIn("repo_id", builder.Select("id").From("`repository`")),
)).
Count()
if err != nil {
@@ -765,7 +765,7 @@ func CountOrphanedLabels() (int64, error) {
noorg, err := db.GetEngine(db.DefaultContext).Table("label").
Where(builder.And(
builder.Gt{"org_id": 0},
builder.NotIn("org_id", builder.Select("id").From("user")),
builder.NotIn("org_id", builder.Select("id").From("`user`")),
)).
Count()
if err != nil {
@@ -786,7 +786,7 @@ func DeleteOrphanedLabels() error {
if _, err := db.GetEngine(db.DefaultContext).
Where(builder.And(
builder.Gt{"repo_id": 0},
builder.NotIn("repo_id", builder.Select("id").From("repository")),
builder.NotIn("repo_id", builder.Select("id").From("`repository`")),
)).
Delete(Label{}); err != nil {
return err
@@ -796,7 +796,7 @@ func DeleteOrphanedLabels() error {
if _, err := db.GetEngine(db.DefaultContext).
Where(builder.And(
builder.Gt{"org_id": 0},
builder.NotIn("org_id", builder.Select("id").From("user")),
builder.NotIn("org_id", builder.Select("id").From("`user`")),
)).
Delete(Label{}); err != nil {
return err
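The only change in these hunks is wrapping the table names in backticks, presumably so the subqueries stay valid where `user` is a reserved identifier. A tiny, self-contained look at the SQL the builder produces for the quoted form, using only `xorm.io/builder` (exact output formatting may vary by version):

```go
package main

import (
	"fmt"

	"xorm.io/builder"
)

func main() {
	// Same shape of condition as DeleteOrphanedLabels above, with the
	// quoted table name in the subquery.
	cond := builder.And(
		builder.Gt{"org_id": 0},
		builder.NotIn("org_id", builder.Select("id").From("`user`")),
	)
	sql, args, err := builder.ToSQL(cond)
	fmt.Println(sql, args, err)
}
```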
+60 -2
@@ -9,6 +9,7 @@ import (
"context"
"fmt"
"io"
"strconv"
"strings"
"code.gitea.io/gitea/models/db"
@@ -133,6 +134,27 @@ const (
PullRequestStatusAncestor
)
func (status PullRequestStatus) String() string {
switch status {
case PullRequestStatusConflict:
return "CONFLICT"
case PullRequestStatusChecking:
return "CHECKING"
case PullRequestStatusMergeable:
return "MERGEABLE"
case PullRequestStatusManuallyMerged:
return "MANUALLY_MERGED"
case PullRequestStatusError:
return "ERROR"
case PullRequestStatusEmpty:
return "EMPTY"
case PullRequestStatusAncestor:
return "ANCESTOR"
default:
return strconv.Itoa(int(status))
}
}
// PullRequestFlow the flow of pull request
type PullRequestFlow int
@@ -204,6 +226,42 @@ func DeletePullsByBaseRepoID(ctx context.Context, repoID int64) error {
return err
}
// ColorFormat writes a colored string to identify this struct
func (pr *PullRequest) ColorFormat(s fmt.State) {
if pr == nil {
log.ColorFprintf(s, "PR[%d]%s#%d[%s...%s:%s]",
log.NewColoredIDValue(0),
log.NewColoredValue("<nil>/<nil>"),
log.NewColoredIDValue(0),
log.NewColoredValue("<nil>"),
log.NewColoredValue("<nil>/<nil>"),
log.NewColoredValue("<nil>"),
)
return
}
log.ColorFprintf(s, "PR[%d]", log.NewColoredIDValue(pr.ID))
if pr.BaseRepo != nil {
log.ColorFprintf(s, "%s#%d[%s...", log.NewColoredValue(pr.BaseRepo.FullName()),
log.NewColoredIDValue(pr.Index), log.NewColoredValue(pr.BaseBranch))
} else {
log.ColorFprintf(s, "Repo[%d]#%d[%s...", log.NewColoredIDValue(pr.BaseRepoID),
log.NewColoredIDValue(pr.Index), log.NewColoredValue(pr.BaseBranch))
}
if pr.HeadRepoID == pr.BaseRepoID {
log.ColorFprintf(s, "%s]", log.NewColoredValue(pr.HeadBranch))
} else if pr.HeadRepo != nil {
log.ColorFprintf(s, "%s:%s]", log.NewColoredValue(pr.HeadRepo.FullName()), log.NewColoredValue(pr.HeadBranch))
} else {
log.ColorFprintf(s, "Repo[%d]:%s]", log.NewColoredIDValue(pr.HeadRepoID), log.NewColoredValue(pr.HeadBranch))
}
}
// String represents the pr as a simple string
func (pr *PullRequest) String() string {
return log.ColorFormatAsString(pr)
}
// MustHeadUserName returns the HeadRepo's username if failed return blank
func (pr *PullRequest) MustHeadUserName() string {
if err := pr.LoadHeadRepo(); err != nil {
@@ -255,7 +313,7 @@ func (pr *PullRequest) LoadHeadRepoCtx(ctx context.Context) (err error) {
pr.HeadRepo, err = repo_model.GetRepositoryByIDCtx(ctx, pr.HeadRepoID)
if err != nil && !repo_model.IsErrRepoNotExist(err) { // Head repo maybe deleted, but it should still work
return fmt.Errorf("getRepositoryByID(head): %w", err)
return fmt.Errorf("pr[%d].LoadHeadRepo[%d]: %w", pr.ID, pr.HeadRepoID, err)
}
pr.isHeadRepoLoaded = true
}
@@ -290,7 +348,7 @@ func (pr *PullRequest) LoadBaseRepoCtx(ctx context.Context) (err error) {
pr.BaseRepo, err = repo_model.GetRepositoryByIDCtx(ctx, pr.BaseRepoID)
if err != nil {
return fmt.Errorf("repo_model.GetRepositoryByID(base): %w", err)
return fmt.Errorf("pr[%d].LoadBaseRepo[%d]: %w", pr.ID, pr.BaseRepoID, err)
}
return nil
}
+12 -30
@@ -13,7 +13,6 @@ import (
"code.gitea.io/gitea/models/unit"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
"xorm.io/xorm"
@@ -53,13 +52,16 @@ func listPullRequestStatement(baseRepoID int64, opts *PullRequestsOptions) (*xor
// GetUnmergedPullRequestsByHeadInfo returns all pull requests that are open and has not been merged
// by given head information (repo and branch).
func GetUnmergedPullRequestsByHeadInfo(repoID int64, branch string) ([]*PullRequest, error) {
// arg `includeClosed` controls whether the SQL returns closed PRs
func GetUnmergedPullRequestsByHeadInfo(repoID int64, branch string, includeClosed bool) ([]*PullRequest, error) {
prs := make([]*PullRequest, 0, 2)
return prs, db.GetEngine(db.DefaultContext).
Where("head_repo_id = ? AND head_branch = ? AND has_merged = ? AND issue.is_closed = ? AND flow = ?",
repoID, branch, false, false, PullRequestFlowGithub).
sess := db.GetEngine(db.DefaultContext).
Join("INNER", "issue", "issue.id = pull_request.issue_id").
Find(&prs)
Where("head_repo_id = ? AND head_branch = ? AND has_merged = ? AND flow = ?", repoID, branch, false, PullRequestFlowGithub)
if !includeClosed {
sess.Where("issue.is_closed = ?", false)
}
return prs, sess.Find(&prs)
}
// CanMaintainerWriteToBranch check whether user is a maintainer and could write to the branch
@@ -72,7 +74,7 @@ func CanMaintainerWriteToBranch(p access_model.Permission, branch string, user *
return false
}
prs, err := GetUnmergedPullRequestsByHeadInfo(p.Units[0].RepoID, branch)
prs, err := GetUnmergedPullRequestsByHeadInfo(p.Units[0].RepoID, branch, false)
if err != nil {
return false
}
@@ -162,7 +164,7 @@ func (prs PullRequestList) loadAttributes(ctx context.Context) error {
}
// Load issues.
issueIDs := prs.getIssueIDs()
issueIDs := prs.GetIssueIDs()
issues := make([]*Issue, 0, len(issueIDs))
if err := db.GetEngine(ctx).
Where("id > 0").
@@ -181,7 +183,8 @@ func (prs PullRequestList) loadAttributes(ctx context.Context) error {
return nil
}
func (prs PullRequestList) getIssueIDs() []int64 {
// GetIssueIDs returns all issue ids
func (prs PullRequestList) GetIssueIDs() []int64 {
issueIDs := make([]int64, 0, len(prs))
for i := range prs {
issueIDs = append(issueIDs, prs[i].IssueID)
@@ -193,24 +196,3 @@ func (prs PullRequestList) getIssueIDs() []int64 {
func (prs PullRequestList) LoadAttributes() error {
return prs.loadAttributes(db.DefaultContext)
}
// InvalidateCodeComments will lookup the prs for code comments which got invalidated by change
func (prs PullRequestList) InvalidateCodeComments(ctx context.Context, doer *user_model.User, repo *git.Repository, branch string) error {
if len(prs) == 0 {
return nil
}
issueIDs := prs.getIssueIDs()
var codeComments []*Comment
if err := db.GetEngine(ctx).
Where("type = ? and invalidated = ?", CommentTypeCode, false).
In("issue_id", issueIDs).
Find(&codeComments); err != nil {
return fmt.Errorf("find code comments: %w", err)
}
for _, comment := range codeComments {
if err := comment.CheckInvalidation(repo, doer, branch); err != nil {
return err
}
}
return nil
}
+1 -1
@@ -119,7 +119,7 @@ func TestHasUnmergedPullRequestsByHeadInfo(t *testing.T) {
func TestGetUnmergedPullRequestsByHeadInfo(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
prs, err := issues_model.GetUnmergedPullRequestsByHeadInfo(1, "branch2")
prs, err := issues_model.GetUnmergedPullRequestsByHeadInfo(1, "branch2", false)
assert.NoError(t, err)
assert.Len(t, prs, 1)
for _, pr := range prs {
+4 -4
@@ -978,7 +978,7 @@ func DeleteReview(r *Review) error {
ReviewID: r.ID,
}
if _, err := sess.Where(opts.toConds()).Delete(new(Comment)); err != nil {
if _, err := sess.Where(opts.ToConds()).Delete(new(Comment)); err != nil {
return err
}
@@ -988,7 +988,7 @@ func DeleteReview(r *Review) error {
ReviewID: r.ID,
}
if _, err := sess.Where(opts.toConds()).Delete(new(Comment)); err != nil {
if _, err := sess.Where(opts.ToConds()).Delete(new(Comment)); err != nil {
return err
}
@@ -1012,7 +1012,7 @@ func (r *Review) GetCodeCommentsCount() int {
IssueID: r.IssueID,
ReviewID: r.ID,
}
conds := opts.toConds()
conds := opts.ToConds()
if r.ID == 0 {
conds = conds.And(builder.Eq{"invalidated": false})
}
@@ -1032,7 +1032,7 @@ func (r *Review) HTMLURL() string {
ReviewID: r.ID,
}
comment := new(Comment)
has, err := db.GetEngine(db.DefaultContext).Where(opts.toConds()).Get(comment)
has, err := db.GetEngine(db.DefaultContext).Where(opts.ToConds()).Get(comment)
if err != nil || !has {
return ""
}
+4 -3
@@ -396,13 +396,14 @@ func (org *Organization) GetOrgUserMaxAuthorizeLevel(uid int64) (perm.AccessMode
}
// GetUsersWhoCanCreateOrgRepo returns users which are able to create repo in organization
func GetUsersWhoCanCreateOrgRepo(ctx context.Context, orgID int64) ([]*user_model.User, error) {
users := make([]*user_model.User, 0, 10)
func GetUsersWhoCanCreateOrgRepo(ctx context.Context, orgID int64) (map[int64]*user_model.User, error) {
// Use a map, in order to de-duplicate users.
users := make(map[int64]*user_model.User)
return users, db.GetEngine(ctx).
Join("INNER", "`team_user`", "`team_user`.uid=`user`.id").
Join("INNER", "`team`", "`team`.id=`team_user`.team_id").
Where(builder.Eq{"team.can_create_org_repo": true}.Or(builder.Eq{"team.authorize": perm.AccessModeOwner})).
And("team_user.org_id = ?", orgID).Asc("`user`.name").Find(&users)
And("team_user.org_id = ?", orgID).Find(&users)
}
// SearchOrganizationsOptions options to filter organizations
+5 -4
@@ -92,11 +92,12 @@ func TestUser_GetTeams(t *testing.T) {
org := unittest.AssertExistsAndLoadBean(t, &organization.Organization{ID: 3})
teams, err := org.LoadTeams()
assert.NoError(t, err)
if assert.Len(t, teams, 4) {
if assert.Len(t, teams, 5) {
assert.Equal(t, int64(1), teams[0].ID)
assert.Equal(t, int64(2), teams[1].ID)
assert.Equal(t, int64(12), teams[2].ID)
assert.Equal(t, int64(7), teams[3].ID)
assert.Equal(t, int64(14), teams[3].ID)
assert.Equal(t, int64(7), teams[4].ID)
}
}
@@ -293,7 +294,7 @@ func TestUser_GetUserTeamIDs(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, expected, teamIDs)
}
testSuccess(2, []int64{1, 2})
testSuccess(2, []int64{1, 2, 14})
testSuccess(4, []int64{2})
testSuccess(unittest.NonexistentID, []int64{})
}
@@ -448,7 +449,7 @@ func TestGetUsersWhoCanCreateOrgRepo(t *testing.T) {
users, err = organization.GetUsersWhoCanCreateOrgRepo(db.DefaultContext, 7)
assert.NoError(t, err)
assert.Len(t, users, 1)
assert.EqualValues(t, 5, users[0].ID)
assert.NotNil(t, users[5])
}
func TestUser_RemoveOrgRepo(t *testing.T) {
+49
@@ -0,0 +1,49 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package user
import (
"context"
"fmt"
"strings"
"code.gitea.io/gitea/models/db"
"xorm.io/builder"
)
func SetMustChangePassword(ctx context.Context, all, mustChangePassword bool, include, exclude []string) (int64, error) {
sliceTrimSpaceDropEmpty := func(input []string) []string {
output := make([]string, 0, len(input))
for _, in := range input {
in = strings.ToLower(strings.TrimSpace(in))
if in == "" {
continue
}
output = append(output, in)
}
return output
}
var cond builder.Cond
// Only include the users where something changes to get an accurate count
cond = builder.Neq{"must_change_password": mustChangePassword}
if !all {
include = sliceTrimSpaceDropEmpty(include)
if len(include) == 0 {
return 0, fmt.Errorf("no users to include provided")
}
cond = cond.And(builder.In("lower_name", include))
}
exclude = sliceTrimSpaceDropEmpty(exclude)
if len(exclude) > 0 {
cond = cond.And(builder.NotIn("lower_name", exclude))
}
return db.GetEngine(ctx).Where(cond).MustCols("must_change_password").Update(&User{MustChangePassword: mustChangePassword})
}
+4 -71
@@ -7,8 +7,6 @@ package user
import (
"context"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
"fmt"
"net/url"
@@ -22,6 +20,7 @@ import (
"code.gitea.io/gitea/models/auth"
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/modules/auth/openid"
"code.gitea.io/gitea/modules/auth/password/hash"
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
@@ -30,10 +29,6 @@ import (
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
"golang.org/x/crypto/argon2"
"golang.org/x/crypto/bcrypt"
"golang.org/x/crypto/pbkdf2"
"golang.org/x/crypto/scrypt"
"xorm.io/builder"
)
@@ -48,21 +43,6 @@ const (
UserTypeOrganization
)
const (
algoBcrypt = "bcrypt"
algoScrypt = "scrypt"
algoArgon2 = "argon2"
algoPbkdf2 = "pbkdf2"
)
// AvailableHashAlgorithms represents the available password hashing algorithms
var AvailableHashAlgorithms = []string{
algoPbkdf2,
algoArgon2,
algoScrypt,
algoBcrypt,
}
const (
// EmailNotificationsEnabled indicates that the user would like to receive all email notifications except their own
EmailNotificationsEnabled = "enabled"
@@ -368,42 +348,6 @@ func (u *User) NewGitSig() *git.Signature {
}
}
func hashPassword(passwd, salt, algo string) (string, error) {
var tempPasswd []byte
var saltBytes []byte
// There are two formats for the Salt value:
// * The new format is a (32+)-byte hex-encoded string
// * The old format was a 10-byte binary format
// We have to tolerate both here but Authenticate should
// regenerate the Salt following a successful validation.
if len(salt) == 10 {
saltBytes = []byte(salt)
} else {
var err error
saltBytes, err = hex.DecodeString(salt)
if err != nil {
return "", err
}
}
switch algo {
case algoBcrypt:
tempPasswd, _ = bcrypt.GenerateFromPassword([]byte(passwd), bcrypt.DefaultCost)
return string(tempPasswd), nil
case algoScrypt:
tempPasswd, _ = scrypt.Key([]byte(passwd), saltBytes, 65536, 16, 2, 50)
case algoArgon2:
tempPasswd = argon2.IDKey([]byte(passwd), saltBytes, 2, 65536, 8, 50)
case algoPbkdf2:
fallthrough
default:
tempPasswd = pbkdf2.Key([]byte(passwd), saltBytes, 10000, 50, sha256.New)
}
return fmt.Sprintf("%x", tempPasswd), nil
}
// SetPassword hashes a password using the algorithm defined in the config value of PASSWORD_HASH_ALGO
// change passwd, salt and passwd_hash_algo fields
func (u *User) SetPassword(passwd string) (err error) {
@@ -417,7 +361,7 @@ func (u *User) SetPassword(passwd string) (err error) {
if u.Salt, err = GetUserSalt(); err != nil {
return err
}
if u.Passwd, err = hashPassword(passwd, u.Salt, setting.PasswordHashAlgo); err != nil {
if u.Passwd, err = hash.Parse(setting.PasswordHashAlgo).Hash(passwd, u.Salt); err != nil {
return err
}
u.PasswdHashAlgo = setting.PasswordHashAlgo
@@ -425,20 +369,9 @@ func (u *User) SetPassword(passwd string) (err error) {
return nil
}
// ValidatePassword checks if given password matches the one belongs to the user.
// ValidatePassword checks if the given password matches the one belonging to the user.
func (u *User) ValidatePassword(passwd string) bool {
tempHash, err := hashPassword(passwd, u.Salt, u.PasswdHashAlgo)
if err != nil {
return false
}
if u.PasswdHashAlgo != algoBcrypt && subtle.ConstantTimeCompare([]byte(u.Passwd), []byte(tempHash)) == 1 {
return true
}
if u.PasswdHashAlgo == algoBcrypt && bcrypt.CompareHashAndPassword([]byte(u.Passwd), []byte(passwd)) == nil {
return true
}
return false
return hash.Parse(u.PasswdHashAlgo).VerifyPassword(passwd, u.Passwd, u.Salt)
}
// IsPasswordSet checks if the password is set or left empty
+2 -1
@@ -13,6 +13,7 @@ import (
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/models/unittest"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/auth/password/hash"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
@@ -162,7 +163,7 @@ func TestEmailNotificationPreferences(t *testing.T) {
func TestHashPasswordDeterministic(t *testing.T) {
b := make([]byte, 16)
u := &user_model.User{}
algos := []string{"argon2", "pbkdf2", "scrypt", "bcrypt"}
algos := hash.RecommendedHashAlgorithms
for j := 0; j < len(algos); j++ {
u.PasswdHashAlgo = algos[j]
for i := 0; i < 50; i++ {
+77
@@ -0,0 +1,77 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"encoding/hex"
"strings"
"code.gitea.io/gitea/modules/log"
"golang.org/x/crypto/argon2"
)
func init() {
Register("argon2", NewArgon2Hasher)
}
// Argon2Hasher implements PasswordHasher
// and uses the Argon2 key derivation function, hybrid variant
type Argon2Hasher struct {
time uint32
memory uint32
threads uint8
keyLen uint32
}
// HashWithSaltBytes hashes the provided password with the provided salt bytes
func (hasher *Argon2Hasher) HashWithSaltBytes(password string, salt []byte) string {
if hasher == nil {
return ""
}
return hex.EncodeToString(argon2.IDKey([]byte(password), salt, hasher.time, hasher.memory, hasher.threads, hasher.keyLen))
}
// NewArgon2Hasher is a factory method to create an Argon2Hasher
// The provided config should be either empty or of the form:
// "<time>$<memory>$<threads>$<keyLen>", where <x> is the string representation
// of an integer
func NewArgon2Hasher(config string) *Argon2Hasher {
// This default configuration uses the following parameters:
// time=2, memory=64*1024, threads=8, keyLen=50.
// It will make two passes through the memory, using 64MiB in total.
hasher := &Argon2Hasher{
time: 2,
memory: 1 << 16,
threads: 8,
keyLen: 50,
}
if config == "" {
return hasher
}
vals := strings.SplitN(config, "$", 4)
if len(vals) != 4 {
log.Error("invalid argon2 hash spec %s", config)
return nil
}
parsed, err := parseUIntParam(vals[0], "time", "argon2", config, nil)
hasher.time = uint32(parsed)
parsed, err = parseUIntParam(vals[1], "memory", "argon2", config, err)
hasher.memory = uint32(parsed)
parsed, err = parseUIntParam(vals[2], "threads", "argon2", config, err)
hasher.threads = uint8(parsed)
parsed, err = parseUIntParam(vals[3], "keyLen", "argon2", config, err)
hasher.keyLen = uint32(parsed)
if err != nil {
return nil
}
return hasher
}
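Illustrative sketch (not part of the diff): how the "<time>$<memory>$<threads>$<keyLen>" spec above is consumed through the Parse helper of this new hash package (shown further down in this compare); the salt is a made-up hex string.

package example

import (
	"fmt"

	"code.gitea.io/gitea/modules/auth/password/hash"
)

func hashWithCustomArgon2(password string) (string, error) {
	// "2$65536$8$50" maps to time=2, memory=64MiB, threads=8, keyLen=50,
	// i.e. the same values NewArgon2Hasher falls back to for an empty config.
	algo := hash.Parse("argon2$2$65536$8$50")
	if algo == nil {
		return "", fmt.Errorf("argon2 hasher could not be constructed")
	}
	// Hex-encoded salt in the new format; legacy 10-byte salts are also accepted.
	return algo.Hash(password, "0123456789abcdef0123456789abcdef")
}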
+51
@@ -0,0 +1,51 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"golang.org/x/crypto/bcrypt"
)
func init() {
Register("bcrypt", NewBcryptHasher)
}
// BcryptHasher implements PasswordHasher
// and uses the bcrypt password hash function.
type BcryptHasher struct {
cost int
}
// HashWithSaltBytes hashes the provided password with the provided salt bytes
func (hasher *BcryptHasher) HashWithSaltBytes(password string, salt []byte) string {
if hasher == nil {
return ""
}
hashedPassword, _ := bcrypt.GenerateFromPassword([]byte(password), hasher.cost)
return string(hashedPassword)
}
func (hasher *BcryptHasher) VerifyPassword(password, hashedPassword, salt string) bool {
return bcrypt.CompareHashAndPassword([]byte(hashedPassword), []byte(password)) == nil
}
// NewBcryptHasher is a factory method to create a BcryptHasher
// The provided config should be either empty or the string representation
// of the integer "<cost>"
func NewBcryptHasher(config string) *BcryptHasher {
hasher := &BcryptHasher{
cost: 10, // cost=10. i.e. 2^10 rounds of key expansion.
}
if config == "" {
return hasher
}
var err error
hasher.cost, err = parseIntParam(config, "cost", "bcrypt", config, nil)
if err != nil {
return nil
}
return hasher
}
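Illustrative sketch (not part of the diff): why bcrypt is the one hasher that also implements PasswordVerifier, assuming the standard x/crypto/bcrypt behaviour.

package example

import "golang.org/x/crypto/bcrypt"

// bcrypt embeds its own salt and cost inside the stored hash, so re-hashing
// with Gitea's separate salt column can never reproduce the stored value;
// verification therefore has to go through bcrypt's own constant-time check.
func verifyBcrypt(password, storedHash string) bool {
	return bcrypt.CompareHashAndPassword([]byte(storedHash), []byte(password)) == nil
}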
+28
@@ -0,0 +1,28 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"strconv"
"code.gitea.io/gitea/modules/log"
)
func parseIntParam(value, param, algorithmName, config string, previousErr error) (int, error) {
parsed, err := strconv.Atoi(value)
if err != nil {
log.Error("invalid integer for %s representation in %s hash spec %s", param, algorithmName, config)
return 0, err
}
return parsed, previousErr // <- Keep the previous error as this function should still return an error once everything has been checked if any call failed
}
func parseUIntParam(value, param, algorithmName, config string, previousErr error) (uint64, error) {
parsed, err := strconv.ParseUint(value, 10, 64)
if err != nil {
log.Error("invalid integer for %s representation in %s hash spec %s", param, algorithmName, config)
return 0, err
}
return parsed, previousErr // <- Keep the previous error as this function should still return an error once everything has been checked if any call failed
}
+147
@@ -0,0 +1,147 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"crypto/subtle"
"encoding/hex"
"fmt"
"strings"
"sync/atomic"
"code.gitea.io/gitea/modules/log"
)
// This package takes care of hashing passwords, verifying passwords, defining
// available password algorithms, defining recommended password algorithms and
// choosing the default password algorithm.
// PasswordSaltHasher will hash a provided password with the provided saltBytes
type PasswordSaltHasher interface {
HashWithSaltBytes(password string, saltBytes []byte) string
}
// PasswordHasher will hash a provided password with the salt
type PasswordHasher interface {
Hash(password, salt string) (string, error)
}
// PasswordVerifier will ensure that a providedPassword matches the hashedPassword when hashed with the salt
type PasswordVerifier interface {
VerifyPassword(providedPassword, hashedPassword, salt string) bool
}
// PasswordHashAlgorithm is a named PasswordSaltHasher with a default verifier and hash function
type PasswordHashAlgorithm struct {
PasswordSaltHasher
Name string
}
// Hash the provided password with the salt and return the hash
func (algorithm *PasswordHashAlgorithm) Hash(password, salt string) (string, error) {
var saltBytes []byte
// There are two formats for the salt value:
// * The new format is a (32+)-byte hex-encoded string
// * The old format was a 10-byte binary format
// We have to tolerate both here.
if len(salt) == 10 {
saltBytes = []byte(salt)
} else {
var err error
saltBytes, err = hex.DecodeString(salt)
if err != nil {
return "", err
}
}
return algorithm.HashWithSaltBytes(password, saltBytes), nil
}
// VerifyPassword checks that the provided password matches the hashedPassword when hashed with the salt
func (algorithm *PasswordHashAlgorithm) VerifyPassword(providedPassword, hashedPassword, salt string) bool {
// The bcrypt package has its own specialized compare function that takes into
// account the stored password's bcrypt parameters.
if verifier, ok := algorithm.PasswordSaltHasher.(PasswordVerifier); ok {
return verifier.VerifyPassword(providedPassword, hashedPassword, salt)
}
// Compute the hash of the password.
providedPasswordHash, err := algorithm.Hash(providedPassword, salt)
if err != nil {
log.Error("passwordhash: %v.Hash(): %v", algorithm.Name, err)
return false
}
// Compare it against the hashed password in constant-time.
return subtle.ConstantTimeCompare([]byte(hashedPassword), []byte(providedPasswordHash)) == 1
}
var (
lastNonDefaultAlgorithm atomic.Value
availableHasherFactories = map[string]func(string) PasswordSaltHasher{}
)
// Register registers a PasswordSaltHasher with the availableHasherFactories
// This is not thread safe.
func Register[T PasswordSaltHasher](name string, newFn func(config string) T) {
if _, has := availableHasherFactories[name]; has {
panic(fmt.Errorf("duplicate registration of password salt hasher: %s", name))
}
availableHasherFactories[name] = func(config string) PasswordSaltHasher {
n := newFn(config)
return n
}
}
// In early versions of gitea the password hash algorithm field could be empty
// At that point the default was `pbkdf2` without configuration values
// Please note this is not the same as the DefaultHashAlgorithm
const defaultEmptyHashAlgorithmName = "pbkdf2"
func Parse(algorithm string) *PasswordHashAlgorithm {
if algorithm == "" {
algorithm = defaultEmptyHashAlgorithmName
}
if DefaultHashAlgorithm != nil && algorithm == DefaultHashAlgorithm.Name {
return DefaultHashAlgorithm
}
ptr := lastNonDefaultAlgorithm.Load()
if ptr != nil {
hashAlgorithm, ok := ptr.(*PasswordHashAlgorithm)
if ok && hashAlgorithm.Name == algorithm {
return hashAlgorithm
}
}
vals := strings.SplitN(algorithm, "$", 2)
var name string
var config string
if len(vals) == 0 {
return nil
}
name = vals[0]
if len(vals) > 1 {
config = vals[1]
}
newFn, has := availableHasherFactories[name]
if !has {
return nil
}
ph := newFn(config)
if ph == nil {
return nil
}
hashAlgorithm := &PasswordHashAlgorithm{
PasswordSaltHasher: ph,
Name: algorithm,
}
lastNonDefaultAlgorithm.Store(hashAlgorithm)
return hashAlgorithm
}
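Illustrative sketch (not part of the diff): the verification round trip a caller goes through with this package; the algorithm spec and column values are placeholders.

package example

import (
	"code.gitea.io/gitea/modules/auth/password/hash"
	"code.gitea.io/gitea/modules/log"
)

func verifyStoredPassword(password, storedHash, salt, storedAlgo string) bool {
	// storedAlgo is whatever was persisted for the user, e.g. "pbkdf2$50000$50".
	// Parse caches the last non-default algorithm, so repeated lookups stay cheap.
	algo := hash.Parse(storedAlgo)
	if algo == nil {
		log.Error("unknown password hash algorithm: %s", storedAlgo)
		return false
	}
	return algo.VerifyPassword(password, storedHash, salt)
}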
+186
@@ -0,0 +1,186 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"encoding/hex"
"strconv"
"strings"
"testing"
"github.com/stretchr/testify/assert"
)
type testSaltHasher string
func (t testSaltHasher) HashWithSaltBytes(password string, salt []byte) string {
return password + "$" + string(salt) + "$" + string(t)
}
func Test_registerHasher(t *testing.T) {
Register("Test_registerHasher", func(config string) testSaltHasher {
return testSaltHasher(config)
})
assert.Panics(t, func() {
Register("Test_registerHasher", func(config string) testSaltHasher {
return testSaltHasher(config)
})
})
assert.Equal(t, "password$salt$",
Parse("Test_registerHasher").PasswordSaltHasher.HashWithSaltBytes("password", []byte("salt")))
assert.Equal(t, "password$salt$config",
Parse("Test_registerHasher$config").PasswordSaltHasher.HashWithSaltBytes("password", []byte("salt")))
delete(availableHasherFactories, "Test_registerHasher")
}
func TestParse(t *testing.T) {
hashAlgorithmsToTest := []string{}
for plainHashAlgorithmNames := range availableHasherFactories {
hashAlgorithmsToTest = append(hashAlgorithmsToTest, plainHashAlgorithmNames)
}
for _, aliased := range aliasAlgorithmNames {
if strings.Contains(aliased, "$") {
hashAlgorithmsToTest = append(hashAlgorithmsToTest, aliased)
}
}
for _, algorithmName := range hashAlgorithmsToTest {
t.Run(algorithmName, func(t *testing.T) {
algo := Parse(algorithmName)
assert.NotNil(t, algo, "Algorithm %s resulted in an empty algorithm", algorithmName)
})
}
}
func TestHashing(t *testing.T) {
hashAlgorithmsToTest := []string{}
for plainHashAlgorithmNames := range availableHasherFactories {
hashAlgorithmsToTest = append(hashAlgorithmsToTest, plainHashAlgorithmNames)
}
for _, aliased := range aliasAlgorithmNames {
if strings.Contains(aliased, "$") {
hashAlgorithmsToTest = append(hashAlgorithmsToTest, aliased)
}
}
runTests := func(password, salt string, shouldPass bool) {
for _, algorithmName := range hashAlgorithmsToTest {
t.Run(algorithmName, func(t *testing.T) {
output, err := Parse(algorithmName).Hash(password, salt)
if shouldPass {
assert.NoError(t, err)
assert.NotEmpty(t, output, "output for %s was empty", algorithmName)
} else {
assert.Error(t, err)
}
assert.Equal(t, Parse(algorithmName).VerifyPassword(password, output, salt), shouldPass)
})
}
}
// Test with new salt format.
runTests(strings.Repeat("a", 16), hex.EncodeToString([]byte{0x01, 0x02, 0x03}), true)
// Test with legacy salt format.
runTests(strings.Repeat("a", 16), strings.Repeat("b", 10), true)
// Test with invalid salt.
runTests(strings.Repeat("a", 16), "a", false)
}
// vectors were generated using the current codebase.
var vectors = []struct {
algorithms []string
password string
salt string
output string
shouldfail bool
}{
{
algorithms: []string{"bcrypt", "bcrypt$10"},
password: "abcdef",
salt: strings.Repeat("a", 10),
output: "$2a$10$fjtm8BsQ2crym01/piJroenO3oSVUBhSLKaGdTYJ4tG0ePVCrU0G2",
shouldfail: false,
},
{
algorithms: []string{"scrypt", "scrypt$65536$16$2$50"},
password: "abcdef",
salt: strings.Repeat("a", 10),
output: "3b571d0c07c62d42b7bad3dbf18fb0cd67d4d8cd4ad4c6928e1090e5b2a4a84437c6fd2627d897c0e7e65025ca62b67a0002",
shouldfail: false,
},
{
algorithms: []string{"argon2", "argon2$2$65536$8$50"},
password: "abcdef",
salt: strings.Repeat("a", 10),
output: "551f089f570f989975b6f7c6a8ff3cf89bc486dd7bbe87ed4d80ad4362f8ee599ec8dda78dac196301b98456402bcda775dc",
shouldfail: false,
},
{
algorithms: []string{"pbkdf2", "pbkdf2$10000$50"},
password: "abcdef",
salt: strings.Repeat("a", 10),
output: "ab48d5471b7e6ed42d10001db88c852ff7303c788e49da5c3c7b63d5adf96360303724b74b679223a3dea8a242d10abb1913",
shouldfail: false,
},
{
algorithms: []string{"bcrypt", "bcrypt$10"},
password: "abcdef",
salt: hex.EncodeToString([]byte{0x01, 0x02, 0x03, 0x04}),
output: "$2a$10$qhgm32w9ZpqLygugWJsLjey8xRGcaq9iXAfmCeNBXxddgyoaOC3Gq",
shouldfail: false,
},
{
algorithms: []string{"scrypt", "scrypt$65536$16$2$50"},
password: "abcdef",
salt: hex.EncodeToString([]byte{0x01, 0x02, 0x03, 0x04}),
output: "25fe5f66b43fa4eb7b6717905317cd2223cf841092dc8e0a1e8c75720ad4846cb5d9387303e14bc3c69faa3b1c51ef4b7de1",
shouldfail: false,
},
{
algorithms: []string{"argon2", "argon2$2$65536$8$50"},
password: "abcdef",
salt: hex.EncodeToString([]byte{0x01, 0x02, 0x03, 0x04}),
output: "9c287db63a91d18bb1414b703216da4fc431387c1ae7c8acdb280222f11f0929831055dbfd5126a3b48566692e83ec750d2a",
shouldfail: false,
},
{
algorithms: []string{"pbkdf2", "pbkdf2$10000$50"},
password: "abcdef",
salt: hex.EncodeToString([]byte{0x01, 0x02, 0x03, 0x04}),
output: "45d6cdc843d65cf0eda7b90ab41435762a282f7df013477a1c5b212ba81dbdca2edf1ecc4b5cb05956bb9e0c37ab29315d78",
shouldfail: false,
},
{
algorithms: []string{"pbkdf2$320000$50"},
password: "abcdef",
salt: hex.EncodeToString([]byte{0x01, 0x02, 0x03, 0x04}),
output: "84e233114499e8721da80e85568e5b7b5900b3e49a30845fcda9d1e1756da4547d70f8740ac2b4a5d82f88cebcd27f21bfe2",
shouldfail: false,
},
{
algorithms: []string{"pbkdf2", "pbkdf2$10000$50"},
password: "abcdef",
salt: "",
output: "",
shouldfail: true,
},
}
// Ensure that the current code will correctly verify against the test vectors.
func TestVectors(t *testing.T) {
for i, vector := range vectors {
for _, algorithm := range vector.algorithms {
t.Run(strconv.Itoa(i)+": "+algorithm, func(t *testing.T) {
pa := Parse(algorithm)
assert.Equal(t, !vector.shouldfail, pa.VerifyPassword(vector.password, vector.output, vector.salt))
})
}
}
}
+62
@@ -0,0 +1,62 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"crypto/sha256"
"encoding/hex"
"strings"
"code.gitea.io/gitea/modules/log"
"golang.org/x/crypto/pbkdf2"
)
func init() {
Register("pbkdf2", NewPBKDF2Hasher)
}
// PBKDF2Hasher implements PasswordHasher
// and uses the PBKDF2 key derivation function.
type PBKDF2Hasher struct {
iter, keyLen int
}
// HashWithSaltBytes hashes the provided password with the provided salt bytes
func (hasher *PBKDF2Hasher) HashWithSaltBytes(password string, salt []byte) string {
if hasher == nil {
return ""
}
return hex.EncodeToString(pbkdf2.Key([]byte(password), salt, hasher.iter, hasher.keyLen, sha256.New))
}
// NewPBKDF2Hasher is a factory method to create a PBKDF2Hasher
// config should be either empty or of the form:
// "<iter>$<keyLen>", where <x> is the string representation
// of an integer
func NewPBKDF2Hasher(config string) *PBKDF2Hasher {
hasher := &PBKDF2Hasher{
iter: 10_000,
keyLen: 50,
}
if config == "" {
return hasher
}
vals := strings.SplitN(config, "$", 2)
if len(vals) != 2 {
log.Error("invalid pbkdf2 hash spec %s", config)
return nil
}
var err error
hasher.iter, err = parseIntParam(vals[0], "iter", "pbkdf2", config, nil)
hasher.keyLen, err = parseIntParam(vals[1], "keyLen", "pbkdf2", config, err)
if err != nil {
return nil
}
return hasher
}
+64
@@ -0,0 +1,64 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"encoding/hex"
"strings"
"code.gitea.io/gitea/modules/log"
"golang.org/x/crypto/scrypt"
)
func init() {
Register("scrypt", NewScryptHasher)
}
// ScryptHasher implements PasswordHasher
// and uses the scrypt key derivation function.
type ScryptHasher struct {
n, r, p, keyLen int
}
// HashWithSaltBytes hashes the provided password with the provided salt bytes
func (hasher *ScryptHasher) HashWithSaltBytes(password string, salt []byte) string {
if hasher == nil {
return ""
}
hashedPassword, _ := scrypt.Key([]byte(password), salt, hasher.n, hasher.r, hasher.p, hasher.keyLen)
return hex.EncodeToString(hashedPassword)
}
// NewScryptHasher is a factory method to create a ScryptHasher
// The provided config should be either empty or of the form:
// "<n>$<r>$<p>$<keyLen>", where <x> is the string representation
// of an integer
func NewScryptHasher(config string) *ScryptHasher {
hasher := &ScryptHasher{
n: 1 << 16,
r: 16,
p: 2, // 2 passes through memory - this default config will use 128MiB in total.
keyLen: 50,
}
if config == "" {
return hasher
}
vals := strings.SplitN(config, "$", 4)
if len(vals) != 4 {
log.Error("invalid scrypt hash spec %s", config)
return nil
}
var err error
hasher.n, err = parseIntParam(vals[0], "n", "scrypt", config, nil)
hasher.r, err = parseIntParam(vals[1], "r", "scrypt", config, err)
hasher.p, err = parseIntParam(vals[2], "p", "scrypt", config, err)
hasher.keyLen, err = parseIntParam(vals[3], "keyLen", "scrypt", config, err)
if err != nil {
return nil
}
return hasher
}
+44
@@ -0,0 +1,44 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
const DefaultHashAlgorithmName = "pbkdf2"
var DefaultHashAlgorithm *PasswordHashAlgorithm
var aliasAlgorithmNames = map[string]string{
"argon2": "argon2$2$65536$8$50",
"bcrypt": "bcrypt$10",
"scrypt": "scrypt$65536$16$2$50",
"pbkdf2": "pbkdf2_v2", // pbkdf2 should default to pbkdf2_v2
"pbkdf2_v1": "pbkdf2$10000$50",
// The latest PBKDF2 password algorithm is used as the default since it doesn't
// use a lot of memory and is safer to use on less powerful devices.
"pbkdf2_v2": "pbkdf2$50000$50",
// The pbkdf2_hi password algorithm is offered as a stronger alternative to the
// slightly improved pbkdf2_v2 algorithm
"pbkdf2_hi": "pbkdf2$320000$50",
}
var RecommendedHashAlgorithms = []string{
"pbkdf2",
"argon2",
"bcrypt",
"scrypt",
"pbkdf2_hi",
}
func SetDefaultPasswordHashAlgorithm(algorithmName string) (string, *PasswordHashAlgorithm) {
if algorithmName == "" {
algorithmName = DefaultHashAlgorithmName
}
alias, has := aliasAlgorithmNames[algorithmName]
for has {
algorithmName = alias
alias, has = aliasAlgorithmNames[algorithmName]
}
DefaultHashAlgorithm = Parse(algorithmName)
return algorithmName, DefaultHashAlgorithm
}
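Illustrative sketch (not part of the diff): how the alias table above resolves when the default algorithm is configured, e.g. from setting or test code.

package example

import "code.gitea.io/gitea/modules/auth/password/hash"

func configureDefaultHasher() {
	// "pbkdf2" -> "pbkdf2_v2" -> "pbkdf2$50000$50": the loop in
	// SetDefaultPasswordHashAlgorithm follows aliases until it reaches a
	// concrete "<name>$<config>" spec, parses it and stores it as the default.
	resolved, algo := hash.SetDefaultPasswordHashAlgorithm("pbkdf2")
	_ = resolved // "pbkdf2$50000$50"
	_ = algo     // *hash.PasswordHashAlgorithm used whenever no explicit algorithm is given
}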
@@ -0,0 +1,38 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package hash
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestCheckSettingPasswordHashAlgorithm(t *testing.T) {
t.Run("pbkdf2 is pbkdf2_v2", func(t *testing.T) {
pbkdf2v2Config, pbkdf2v2Algo := SetDefaultPasswordHashAlgorithm("pbkdf2_v2")
pbkdf2Config, pbkdf2Algo := SetDefaultPasswordHashAlgorithm("pbkdf2")
assert.Equal(t, pbkdf2v2Config, pbkdf2Config)
assert.Equal(t, pbkdf2v2Algo.Name, pbkdf2Algo.Name)
})
for a, b := range aliasAlgorithmNames {
t.Run(a+"="+b, func(t *testing.T) {
aConfig, aAlgo := SetDefaultPasswordHashAlgorithm(a)
bConfig, bAlgo := SetDefaultPasswordHashAlgorithm(b)
assert.Equal(t, bConfig, aConfig)
assert.Equal(t, aAlgo.Name, bAlgo.Name)
})
}
t.Run("pbkdf2_v2 is the default when default password hash algorithm is empty", func(t *testing.T) {
emptyConfig, emptyAlgo := SetDefaultPasswordHashAlgorithm("")
pbkdf2v2Config, pbkdf2v2Algo := SetDefaultPasswordHashAlgorithm("pbkdf2_v2")
assert.Equal(t, pbkdf2v2Config, emptyConfig)
assert.Equal(t, pbkdf2v2Algo.Name, emptyAlgo.Name)
})
}
@@ -12,8 +12,8 @@ import (
"strings"
"sync"
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/translation"
)
// complexity contains information about a particular kind of password complexity
@@ -113,13 +113,13 @@ func Generate(n int) (string, error) {
}
// BuildComplexityError builds the error message when password complexity checks fail
func BuildComplexityError(ctx *context.Context) string {
func BuildComplexityError(locale translation.Locale) string {
var buffer bytes.Buffer
buffer.WriteString(ctx.Tr("form.password_complexity"))
buffer.WriteString(locale.Tr("form.password_complexity"))
buffer.WriteString("<ul>")
for _, c := range requiredList {
buffer.WriteString("<li>")
buffer.WriteString(ctx.Tr(c.TrNameOne))
buffer.WriteString(locale.Tr(c.TrNameOne))
buffer.WriteString("</li>")
}
buffer.WriteString("</ul>")
+1 -5
@@ -45,7 +45,7 @@ func EscapeControlReader(reader io.Reader, writer io.Writer, locale translation.
return streamer.escaped, err
}
// EscapeControlStringReader escapes the unicode control sequences in a provided reader of string content and writer in a locale and returns the findings as an EscapeStatus and the escaped []byte
// EscapeControlStringReader escapes the unicode control sequences in a provided reader of string content and writer in a locale and returns the findings as an EscapeStatus and the escaped []byte. HTML line breaks are not inserted after every newline by this method.
func EscapeControlStringReader(reader io.Reader, writer io.Writer, locale translation.Locale, allowed ...rune) (escaped *EscapeStatus, err error) {
bufRd := bufio.NewReader(reader)
outputStream := &HTMLStreamerWriter{Writer: writer}
@@ -66,10 +66,6 @@ func EscapeControlStringReader(reader io.Reader, writer io.Writer, locale transl
}
break
}
if err := streamer.SelfClosingTag("br"); err != nil {
streamer.escaped.HasError = true
return streamer.escaped, err
}
}
return streamer.escaped, err
}
+7 -17
@@ -7,7 +7,6 @@ package charset
import (
"fmt"
"regexp"
"sort"
"strings"
"unicode"
"unicode/utf8"
@@ -21,12 +20,16 @@ import (
var defaultWordRegexp = regexp.MustCompile(`(-?\d*\.\d\w*)|([^\` + "`" + `\~\!\@\#\$\%\^\&\*\(\)\-\=\+\[\{\]\}\\\|\;\:\'\"\,\.\<\>\/\?\s\x00-\x1f]+)`)
func NewEscapeStreamer(locale translation.Locale, next HTMLStreamer, allowed ...rune) HTMLStreamer {
allowedM := make(map[rune]bool, len(allowed))
for _, v := range allowed {
allowedM[v] = true
}
return &escapeStreamer{
escaped: &EscapeStatus{},
PassthroughHTMLStreamer: *NewPassthroughStreamer(next),
locale: locale,
ambiguousTables: AmbiguousTablesForLocale(locale),
allowed: allowed,
allowed: allowedM,
}
}
@@ -35,7 +38,7 @@ type escapeStreamer struct {
escaped *EscapeStatus
locale translation.Locale
ambiguousTables []*AmbiguousTable
allowed []rune
allowed map[rune]bool
}
func (e *escapeStreamer) EscapeStatus() *EscapeStatus {
@@ -257,7 +260,7 @@ func (e *escapeStreamer) runeTypes(runes ...rune) (types []runeType, confusables
runeCounts.numBrokenRunes++
case r == ' ' || r == '\t' || r == '\n':
runeCounts.numBasicRunes++
case e.isAllowed(r):
case e.allowed[r]:
if r > 0x7e || r < 0x20 {
types[i] = nonBasicASCIIRuneType
runeCounts.numNonConfusingNonBasicRunes++
@@ -283,16 +286,3 @@ func (e *escapeStreamer) runeTypes(runes ...rune) (types []runeType, confusables
}
return types, confusables, runeCounts
}
func (e *escapeStreamer) isAllowed(r rune) bool {
if len(e.allowed) == 0 {
return false
}
if len(e.allowed) == 1 {
return e.allowed[0] == r
}
return sort.Search(len(e.allowed), func(i int) bool {
return e.allowed[i] >= r
}) >= 0
}
+1 -1
@@ -7,8 +7,8 @@ package context
import (
"bytes"
"context"
"html/template"
"net/http"
"text/template"
"time"
"code.gitea.io/gitea/modules/log"
+4 -3
@@ -19,10 +19,11 @@ type Pagination struct {
urlParams []string
}
// NewPagination creates a new instance of the Pagination struct
func NewPagination(total, page, issueNum, numPages int) *Pagination {
// NewPagination creates a new instance of the Pagination struct.
// "pagingNum" is "page size" or "limit", "current" is "page"
func NewPagination(total, pagingNum, current, numPages int) *Pagination {
p := &Pagination{}
p.Paginater = paginator.New(total, page, issueNum, numPages)
p.Paginater = paginator.New(total, pagingNum, current, numPages)
return p
}
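Illustrative sketch (not part of the diff): the renamed parameters in use; the concrete numbers are arbitrary.

package example

import gitea_context "code.gitea.io/gitea/modules/context"

// paginate builds a pager for `total` items with `pageSize` items per page,
// positioned on `currentPage`, rendering at most 5 page links.
func paginate(total, pageSize, currentPage int) *gitea_context.Pagination {
	return gitea_context.NewPagination(total, pageSize, currentPage, 5)
}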
+20 -20
@@ -24,12 +24,12 @@ type BlamePart struct {
// BlameReader returns part of file blame one by one
type BlameReader struct {
cmd *exec.Cmd
output io.ReadCloser
reader *bufio.Reader
lastSha *string
cancel context.CancelFunc // Cancels the context that this reader runs in
finished process.FinishedFunc // Tells the process manager we're finished and it can remove the associated process from the process table
cmd *exec.Cmd
reader io.ReadCloser
lastSha *string
cancel context.CancelFunc // Cancels the context that this reader runs in
finished process.FinishedFunc // Tells the process manager we're finished and it can remove the associated process from the process table
bufferedReader *bufio.Reader
}
var shaLineRegex = regexp.MustCompile("^([a-z0-9]{40})")
@@ -38,8 +38,6 @@ var shaLineRegex = regexp.MustCompile("^([a-z0-9]{40})")
func (r *BlameReader) NextPart() (*BlamePart, error) {
var blamePart *BlamePart
reader := r.reader
if r.lastSha != nil {
blamePart = &BlamePart{*r.lastSha, make([]string, 0)}
}
@@ -49,7 +47,7 @@ func (r *BlameReader) NextPart() (*BlamePart, error) {
var err error
for err != io.EOF {
line, isPrefix, err = reader.ReadLine()
line, isPrefix, err = r.bufferedReader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
@@ -71,7 +69,7 @@ func (r *BlameReader) NextPart() (*BlamePart, error) {
r.lastSha = &sha1
// need to munch to end of line...
for isPrefix {
_, isPrefix, err = reader.ReadLine()
_, isPrefix, err = r.bufferedReader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
@@ -86,7 +84,7 @@ func (r *BlameReader) NextPart() (*BlamePart, error) {
// need to munch to end of line...
for isPrefix {
_, isPrefix, err = reader.ReadLine()
_, isPrefix, err = r.bufferedReader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
@@ -102,9 +100,9 @@ func (r *BlameReader) NextPart() (*BlamePart, error) {
func (r *BlameReader) Close() error {
defer r.finished() // Only remove the process from the process table when the underlying command is closed
r.cancel() // However, first cancel our own context early
r.bufferedReader = nil
_ = r.output.Close()
_ = r.reader.Close()
if err := r.cmd.Wait(); err != nil {
return fmt.Errorf("Wait: %w", err)
}
@@ -126,25 +124,27 @@ func createBlameReader(ctx context.Context, dir string, command ...string) (*Bla
cmd.Stderr = os.Stderr
process.SetSysProcAttribute(cmd)
stdout, err := cmd.StdoutPipe()
reader, stdout, err := os.Pipe()
if err != nil {
defer finished()
return nil, fmt.Errorf("StdoutPipe: %w", err)
}
cmd.Stdout = stdout
if err = cmd.Start(); err != nil {
defer finished()
_ = stdout.Close()
return nil, fmt.Errorf("Start: %w", err)
}
_ = stdout.Close()
reader := bufio.NewReader(stdout)
bufferedReader := bufio.NewReader(reader)
return &BlameReader{
cmd: cmd,
output: stdout,
reader: reader,
cancel: cancel,
finished: finished,
cmd: cmd,
reader: reader,
cancel: cancel,
finished: finished,
bufferedReader: bufferedReader,
}, nil
}
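Illustrative sketch (not part of the diff): the os.Pipe ownership pattern used above, stripped down to generic Go with placeholder command arguments. The parent keeps the read end for later consumption but must close its copy of the write end right after Start, otherwise the reader would never see EOF.

package example

import (
	"bufio"
	"os"
	"os/exec"
)

func runWithPipe() error {
	reader, writer, err := os.Pipe()
	if err != nil {
		return err
	}
	cmd := exec.Command("git", "blame", "--porcelain", "HEAD", "--", "main.go")
	cmd.Stdout = writer
	if err := cmd.Start(); err != nil {
		_ = reader.Close()
		_ = writer.Close()
		return err
	}
	_ = writer.Close() // the child now owns the only open write end
	buffered := bufio.NewReader(reader)
	for {
		if _, _, err := buffered.ReadLine(); err != nil {
			break // io.EOF once the child has finished writing
		}
	}
	_ = reader.Close()
	return cmd.Wait()
}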
+2 -4
@@ -65,7 +65,7 @@ summary Add code of delete user
previous be0ba9ea88aff8a658d0495d36accf944b74888d gogs.go
filename gogs.go
// license that can be found in the LICENSE file.
` + `
e2aa991e10ffd924a828ec149951f2f20eecead2 6 6 2
author Lunny Xiao
author-mail <xiaolunwen@gmail.com>
@@ -112,9 +112,7 @@ func TestReadingBlameOutput(t *testing.T) {
},
{
"ce21ed6c3490cdfad797319cbb1145e2330a8fef",
[]string{
"// Copyright 2016 The Gitea Authors. All rights reserved.",
},
[]string{"// Copyright 2016 The Gitea Authors. All rights reserved."},
},
{
"4b92a6c2df28054ad766bc262f308db9f6066596",
+1 -1
@@ -132,7 +132,7 @@ func CommitChangesWithArgs(repoPath string, args []CmdArg, opts CommitChangesOpt
if opts.Author != nil {
cmd.AddArguments(CmdArg(fmt.Sprintf("--author='%s <%s>'", opts.Author.Name, opts.Author.Email)))
}
cmd.AddArguments("-m").AddDynamicArguments(opts.Message)
cmd.AddArguments(CmdArg("--message=" + opts.Message))
_, _, err := cmd.RunStdString(&RunOpts{Dir: repoPath})
// No stderr but exit status 1 means nothing to commit.
+4 -4
@@ -313,7 +313,7 @@ func CheckGitVersionAtLeast(atLeast string) error {
}
func configSet(key, value string) error {
stdout, _, err := NewCommand(DefaultContext, "config", "--get").AddDynamicArguments(key).RunStdString(nil)
stdout, _, err := NewCommand(DefaultContext, "config", "--global", "--get").AddDynamicArguments(key).RunStdString(nil)
if err != nil && !err.IsExitCode(1) {
return fmt.Errorf("failed to get git config %s, err: %w", key, err)
}
@@ -332,7 +332,7 @@ func configSet(key, value string) error {
}
func configSetNonExist(key, value string) error {
_, _, err := NewCommand(DefaultContext, "config", "--get").AddDynamicArguments(key).RunStdString(nil)
_, _, err := NewCommand(DefaultContext, "config", "--global", "--get").AddDynamicArguments(key).RunStdString(nil)
if err == nil {
// already exist
return nil
@@ -350,7 +350,7 @@ func configSetNonExist(key, value string) error {
}
func configAddNonExist(key, value string) error {
_, _, err := NewCommand(DefaultContext, "config", "--get").AddDynamicArguments(key, regexp.QuoteMeta(value)).RunStdString(nil)
_, _, err := NewCommand(DefaultContext, "config", "--global", "--get").AddDynamicArguments(key, regexp.QuoteMeta(value)).RunStdString(nil)
if err == nil {
// already exist
return nil
@@ -367,7 +367,7 @@ func configAddNonExist(key, value string) error {
}
func configUnsetAll(key, value string) error {
_, _, err := NewCommand(DefaultContext, "config", "--get").AddDynamicArguments(key).RunStdString(nil)
_, _, err := NewCommand(DefaultContext, "config", "--global", "--get").AddDynamicArguments(key).RunStdString(nil)
if err == nil {
// exist, need to remove
_, _, err = NewCommand(DefaultContext, "config", "--global", "--unset-all").AddDynamicArguments(key, regexp.QuoteMeta(value)).RunStdString(nil)
+2 -4
@@ -164,10 +164,8 @@ func CloneWithArgs(ctx context.Context, args []CmdArg, from, to string, opts Clo
envs := os.Environ()
u, err := url.Parse(from)
if err == nil && (strings.EqualFold(u.Scheme, "http") || strings.EqualFold(u.Scheme, "https")) {
if proxy.Match(u.Host) {
envs = append(envs, fmt.Sprintf("https_proxy=%s", proxy.GetProxyURL()))
}
if err == nil {
envs = proxy.EnvWithProxy(u)
}
stderr := new(bytes.Buffer)
+9 -2
@@ -282,11 +282,18 @@ func (repo *Repository) GetPatch(base, head string, w io.Writer) error {
// GetFilesChangedBetween returns a list of all files that have been changed between the given commits
func (repo *Repository) GetFilesChangedBetween(base, head string) ([]string, error) {
stdout, _, err := NewCommand(repo.Ctx, "diff", "--name-only").AddDynamicArguments(base + ".." + head).RunStdString(&RunOpts{Dir: repo.Path})
stdout, _, err := NewCommand(repo.Ctx, "diff", "--name-only", "-z").AddDynamicArguments(base + ".." + head).RunStdString(&RunOpts{Dir: repo.Path})
if err != nil {
return nil, err
}
return strings.Split(stdout, "\n"), err
split := strings.Split(stdout, "\000")
// Because Git always terminates each filename with a NUL, ignore the last entry in the split, which will always be empty.
if len(split) > 0 {
split = split[:len(split)-1]
}
return split, err
}
// GetDiffFromMergeBase generates and returns patch data from merge base to head
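Illustrative sketch (not part of the diff): the NUL-terminated parsing done in GetFilesChangedBetween above, with a hard-coded sample of `git diff --name-only -z` output.

package example

import "strings"

func splitNulTerminated() []string {
	// Git terminates every name with NUL, so the last element of the split is
	// always the empty string and is dropped; filenames containing newlines
	// survive intact, which a plain "\n" split would have broken apart.
	stdout := "a.go\x00docs/has a\nnewline.md\x00"
	split := strings.Split(stdout, "\x00")
	if len(split) > 0 {
		split = split[:len(split)-1]
	}
	return split // []string{"a.go", "docs/has a\nnewline.md"}
}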
+2 -2
@@ -5,7 +5,6 @@
package lfs
import (
"fmt"
"net/url"
"os"
"path"
@@ -13,6 +12,7 @@ import (
"strings"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util"
)
// DetermineEndpoint determines an endpoint from the clone url or uses the specified LFS url.
@@ -96,7 +96,7 @@ func endpointFromLocalPath(path string) *url.URL {
return nil
}
path = fmt.Sprintf("file://%s%s", slash, filepath.ToSlash(path))
path = "file://" + slash + util.PathEscapeSegments(filepath.ToSlash(path))
u, _ := url.Parse(path)
+7
@@ -384,6 +384,13 @@ func (cv *ColoredValue) Format(s fmt.State, c rune) {
s.Write(*cv.resetBytes)
}
// ColorFormatAsString returns the result of the ColorFormat without the color
func ColorFormatAsString(colorVal ColorFormatted) string {
s := new(strings.Builder)
_, _ = ColorFprintf(&protectedANSIWriter{w: s, mode: removeColor}, "%-v", colorVal)
return s.String()
}
// SetColorBytes will allow a user to set the colorBytes of a colored value
func (cv *ColoredValue) SetColorBytes(colorBytes []byte) {
cv.colorBytes = &colorBytes
+1 -1
@@ -7,7 +7,7 @@ package log
import "unsafe"
//go:linkname runtime_getProfLabel runtime/pprof.runtime_getProfLabel
func runtime_getProfLabel() unsafe.Pointer // nolint
func runtime_getProfLabel() unsafe.Pointer //nolint
type labelMap map[string]string
+11
@@ -10,6 +10,8 @@ import (
"runtime"
"strings"
"sync"
"code.gitea.io/gitea/modules/process"
)
type loggerMap struct {
@@ -286,6 +288,15 @@ func (l *LoggerAsWriter) Log(msg string) {
}
func init() {
process.Trace = func(start bool, pid process.IDType, description string, parentPID process.IDType, typ string) {
if start && parentPID != "" {
Log(1, TRACE, "Start %s: %s (from %s) (%s)", NewColoredValue(pid, FgHiYellow), description, NewColoredValue(parentPID, FgYellow), NewColoredValue(typ, Reset))
} else if start {
Log(1, TRACE, "Start %s: %s (%s)", NewColoredValue(pid, FgHiYellow), description, NewColoredValue(typ, Reset))
} else {
Log(1, TRACE, "Done %s: %s", NewColoredValue(pid, FgHiYellow), NewColoredValue(description, Reset))
}
}
_, filename, _, _ := runtime.Caller(0)
prefix = strings.TrimSuffix(filename, "modules/log/log.go")
if prefix == filename {
+9 -2
@@ -358,12 +358,19 @@ func postProcess(ctx *RenderContext, procs []processor, input io.Reader, output
}
func visitNode(ctx *RenderContext, procs, textProcs []processor, node *html.Node) {
// Add user-content- to IDs if they don't already have them
// Add user-content- to IDs and "#" links if they don't already have them
for idx, attr := range node.Attr {
if attr.Key == "id" && !(strings.HasPrefix(attr.Val, "user-content-") || blackfridayExtRegex.MatchString(attr.Val)) {
val := strings.TrimPrefix(attr.Val, "#")
notHasPrefix := !(strings.HasPrefix(val, "user-content-") || blackfridayExtRegex.MatchString(val))
if attr.Key == "id" && notHasPrefix {
node.Attr[idx].Val = "user-content-" + attr.Val
}
if attr.Key == "href" && strings.HasPrefix(attr.Val, "#") && notHasPrefix {
node.Attr[idx].Val = "#user-content-" + val
}
if attr.Key == "class" && attr.Val == "emoji" {
textProcs = nil
}
+1 -2
@@ -9,8 +9,7 @@ import "time"
// Commentable can be commented upon
type Commentable interface {
GetLocalIndex() int64
GetForeignIndex() int64
Reviewable
GetContext() DownloaderContext
}
+11 -2
@@ -35,6 +35,15 @@ func (issue *Issue) GetExternalName() string { return issue.PosterName }
// GetExternalID ExternalUserMigrated interface
func (issue *Issue) GetExternalID() int64 { return issue.PosterID }
func (issue *Issue) GetLocalIndex() int64 { return issue.Number }
func (issue *Issue) GetForeignIndex() int64 { return issue.ForeignIndex }
func (issue *Issue) GetLocalIndex() int64 { return issue.Number }
func (issue *Issue) GetForeignIndex() int64 {
// see the comment of Reviewable.GetForeignIndex
// if there is no ForeignIndex, then use LocalIndex
if issue.ForeignIndex == 0 {
return issue.Number
}
return issue.ForeignIndex
}
func (issue *Issue) GetContext() DownloaderContext { return issue.Context }
+11 -1
@@ -9,6 +9,16 @@ import "time"
// Reviewable can be reviewed
type Reviewable interface {
GetLocalIndex() int64
// GetForeignIndex presents the foreign index, which could be misused:
// For example, if there are 2 Gitea sites: site-A exports a dataset, then site-B imports it:
// * if site-A exports files by using its LocalIndex
// * from site-A's view, LocalIndex is site-A's IssueIndex while ForeignIndex is site-B's IssueIndex
// * but from site-B's view, LocalIndex is site-B's IssueIndex while ForeignIndex is site-A's IssueIndex
//
// So the exporting/importing must be paired, but the meaning of them looks confusing then:
// * either site-A and site-B both use LocalIndex during dumping/restoring
// * or site-A and site-B both use ForeignIndex
GetForeignIndex() int64
}
@@ -38,7 +48,7 @@ type Review struct {
// GetExternalName ExternalUserMigrated interface
func (r *Review) GetExternalName() string { return r.ReviewerName }
// ExternalID ExternalUserMigrated interface
// GetExternalID ExternalUserMigrated interface
func (r *Review) GetExternalID() int64 { return r.ReviewerID }
// ReviewComment represents a review comment
+1
@@ -96,6 +96,7 @@ func (ns *notificationService) NotifyIssueChangeStatus(doer *user_model.User, is
_ = ns.issueQueue.Push(issueNotificationOpts{
IssueID: issue.ID,
NotificationAuthorID: doer.ID,
CommentID: actionComment.ID,
})
}
+4 -3
@@ -11,8 +11,9 @@ import (
"code.gitea.io/gitea/modules/json"
"code.gitea.io/gitea/modules/packages/container/helm"
"code.gitea.io/gitea/modules/packages/container/oci"
"code.gitea.io/gitea/modules/validation"
oci "github.com/opencontainers/image-spec/specs-go/v1"
)
const (
@@ -66,8 +67,8 @@ type Metadata struct {
}
// ParseImageConfig parses the metadata of an image config
func ParseImageConfig(mediaType oci.MediaType, r io.Reader) (*Metadata, error) {
if strings.EqualFold(string(mediaType), helm.ConfigMediaType) {
func ParseImageConfig(mt string, r io.Reader) (*Metadata, error) {
if strings.EqualFold(mt, helm.ConfigMediaType) {
return parseHelmConfig(r)
}
+3 -3
@@ -9,8 +9,8 @@ import (
"testing"
"code.gitea.io/gitea/modules/packages/container/helm"
"code.gitea.io/gitea/modules/packages/container/oci"
oci "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/stretchr/testify/assert"
)
@@ -24,7 +24,7 @@ func TestParseImageConfig(t *testing.T) {
configOCI := `{"config": {"labels": {"` + labelAuthors + `": "` + author + `", "` + labelLicenses + `": "` + license + `", "` + labelURL + `": "` + projectURL + `", "` + labelSource + `": "` + repositoryURL + `", "` + labelDocumentation + `": "` + documentationURL + `", "` + labelDescription + `": "` + description + `"}}, "history": [{"created_by": "do it 1"}, {"created_by": "dummy #(nop) do it 2"}]}`
metadata, err := ParseImageConfig(oci.MediaType(oci.MediaTypeImageManifest), strings.NewReader(configOCI))
metadata, err := ParseImageConfig(oci.MediaTypeImageManifest, strings.NewReader(configOCI))
assert.NoError(t, err)
assert.Equal(t, TypeOCI, metadata.Type)
@@ -51,7 +51,7 @@ func TestParseImageConfig(t *testing.T) {
configHelm := `{"description":"` + description + `", "home": "` + projectURL + `", "sources": ["` + repositoryURL + `"], "maintainers":[{"name":"` + author + `"}]}`
metadata, err = ParseImageConfig(oci.MediaType(helm.ConfigMediaType), strings.NewReader(configHelm))
metadata, err = ParseImageConfig(helm.ConfigMediaType, strings.NewReader(configHelm))
assert.NoError(t, err)
assert.Equal(t, TypeHelm, metadata.Type)
-27
@@ -1,27 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package oci
import (
"regexp"
"strings"
)
var digestPattern = regexp.MustCompile(`\Asha256:[a-f0-9]{64}\z`)
type Digest string
// Validate checks if the digest has a valid SHA256 signature
func (d Digest) Validate() bool {
return digestPattern.MatchString(string(d))
}
func (d Digest) Hash() string {
p := strings.SplitN(string(d), ":", 2)
if len(p) != 2 {
return ""
}
return p[1]
}
@@ -1,36 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package oci
import (
"strings"
)
const (
MediaTypeImageManifest = "application/vnd.oci.image.manifest.v1+json"
MediaTypeImageIndex = "application/vnd.oci.image.index.v1+json"
MediaTypeDockerManifest = "application/vnd.docker.distribution.manifest.v2+json"
MediaTypeDockerManifestList = "application/vnd.docker.distribution.manifest.list.v2+json"
)
type MediaType string
// IsValid tests if the media type is in the OCI or Docker namespace
func (m MediaType) IsValid() bool {
s := string(m)
return strings.HasPrefix(s, "application/vnd.docker.") || strings.HasPrefix(s, "application/vnd.oci.")
}
// IsImageManifest tests if the media type is an image manifest
func (m MediaType) IsImageManifest() bool {
s := string(m)
return strings.EqualFold(s, MediaTypeDockerManifest) || strings.EqualFold(s, MediaTypeImageManifest)
}
// IsImageIndex tests if the media type is an image index
func (m MediaType) IsImageIndex() bool {
s := string(m)
return strings.EqualFold(s, MediaTypeDockerManifestList) || strings.EqualFold(s, MediaTypeImageIndex)
}
-191
@@ -1,191 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package oci
import (
"time"
)
// https://github.com/opencontainers/image-spec/tree/main/specs-go/v1
// ImageConfig defines the execution parameters which should be used as a base when running a container using an image.
type ImageConfig struct {
// User defines the username or UID which the process in the container should run as.
User string `json:"User,omitempty"`
// ExposedPorts a set of ports to expose from a container running this image.
ExposedPorts map[string]struct{} `json:"ExposedPorts,omitempty"`
// Env is a list of environment variables to be used in a container.
Env []string `json:"Env,omitempty"`
// Entrypoint defines a list of arguments to use as the command to execute when the container starts.
Entrypoint []string `json:"Entrypoint,omitempty"`
// Cmd defines the default arguments to the entrypoint of the container.
Cmd []string `json:"Cmd,omitempty"`
// Volumes is a set of directories describing where the process is likely write data specific to a container instance.
Volumes map[string]struct{} `json:"Volumes,omitempty"`
// WorkingDir sets the current working directory of the entrypoint process in the container.
WorkingDir string `json:"WorkingDir,omitempty"`
// Labels contains arbitrary metadata for the container.
Labels map[string]string `json:"Labels,omitempty"`
// StopSignal contains the system call signal that will be sent to the container to exit.
StopSignal string `json:"StopSignal,omitempty"`
}
// RootFS describes a layer content addresses
type RootFS struct {
// Type is the type of the rootfs.
Type string `json:"type"`
// DiffIDs is an array of layer content hashes, in order from bottom-most to top-most.
DiffIDs []string `json:"diff_ids"`
}
// History describes the history of a layer.
type History struct {
// Created is the combined date and time at which the layer was created, formatted as defined by RFC 3339, section 5.6.
Created *time.Time `json:"created,omitempty"`
// CreatedBy is the command which created the layer.
CreatedBy string `json:"created_by,omitempty"`
// Author is the author of the build point.
Author string `json:"author,omitempty"`
// Comment is a custom message set when creating the layer.
Comment string `json:"comment,omitempty"`
// EmptyLayer is used to mark if the history item created a filesystem diff.
EmptyLayer bool `json:"empty_layer,omitempty"`
}
// Image is the JSON structure which describes some basic information about the image.
// This provides the `application/vnd.oci.image.config.v1+json` mediatype when marshalled to JSON.
type Image struct {
// Created is the combined date and time at which the image was created, formatted as defined by RFC 3339, section 5.6.
Created *time.Time `json:"created,omitempty"`
// Author defines the name and/or email address of the person or entity which created and is responsible for maintaining the image.
Author string `json:"author,omitempty"`
// Architecture is the CPU architecture which the binaries in this image are built to run on.
Architecture string `json:"architecture"`
// Variant is the variant of the specified CPU architecture which image binaries are intended to run on.
Variant string `json:"variant,omitempty"`
// OS is the name of the operating system which the image is built to run on.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example on Windows `10.0.14393.1066`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Config defines the execution parameters which should be used as a base when running a container using the image.
Config ImageConfig `json:"config,omitempty"`
// RootFS references the layer content addresses used by the image.
RootFS RootFS `json:"rootfs"`
// History describes the history of each layer.
History []History `json:"history,omitempty"`
}
// Descriptor describes the disposition of targeted content.
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype
// when marshalled to JSON.
type Descriptor struct {
// MediaType is the media type of the object this schema refers to.
MediaType MediaType `json:"mediaType,omitempty"`
// Digest is the digest of the targeted content.
Digest Digest `json:"digest"`
// Size specifies the size in bytes of the blob.
Size int64 `json:"size"`
// URLs specifies a list of URLs from which this object MAY be downloaded
URLs []string `json:"urls,omitempty"`
// Annotations contains arbitrary metadata relating to the targeted content.
Annotations map[string]string `json:"annotations,omitempty"`
// Data is an embedding of the targeted content. This is encoded as a base64
// string when marshalled to JSON (automatically, by encoding/json). If
// present, Data can be used directly to avoid fetching the targeted content.
Data []byte `json:"data,omitempty"`
// Platform describes the platform which the image in the manifest runs on.
//
// This should only be used when referring to a manifest.
Platform *Platform `json:"platform,omitempty"`
}
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example on Windows `10.0.14393.1066`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `v7` to specify ARMv7 when architecture is `arm`.
Variant string `json:"variant,omitempty"`
}
type SchemaMediaBase struct {
// SchemaVersion is the image manifest schema that this image follows
SchemaVersion int `json:"schemaVersion"`
// MediaType specifies the type of this document data structure e.g. `application/vnd.oci.image.manifest.v1+json`
MediaType MediaType `json:"mediaType,omitempty"`
}
// Manifest provides `application/vnd.oci.image.manifest.v1+json` mediatype structure when marshalled to JSON.
type Manifest struct {
SchemaMediaBase
// Config references a configuration object for a container, by digest.
// The referenced configuration object is a JSON blob that the runtime uses to set up the container.
Config Descriptor `json:"config"`
// Layers is an indexed list of layers referenced by the manifest.
Layers []Descriptor `json:"layers"`
// Annotations contains arbitrary metadata for the image manifest.
Annotations map[string]string `json:"annotations,omitempty"`
}
// Index references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type Index struct {
SchemaMediaBase
// Manifests references platform specific manifests.
Manifests []Descriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}
@@ -1,17 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package oci
import (
"regexp"
)
var referencePattern = regexp.MustCompile(`\A[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}\z`)
type Reference string
func (r Reference) Validate() bool {
return referencePattern.MatchString(string(r))
}
+15 -7
@@ -7,6 +7,7 @@ package process
import (
"context"
"log"
"runtime/pprof"
"strconv"
"sync"
@@ -44,6 +45,18 @@ type IDType string
// - it is simply an alias for context.CancelFunc and is only for documentation purposes
type FinishedFunc = context.CancelFunc
var Trace = defaultTrace // this global can be overridden by particular logging packages - thus avoiding import cycles
func defaultTrace(start bool, pid IDType, description string, parentPID IDType, typ string) {
if start && parentPID != "" {
log.Printf("start process %s: %s (from %s) (%s)", pid, description, parentPID, typ)
} else if start {
log.Printf("start process %s: %s (%s)", pid, description, typ)
} else {
log.Printf("end process %s: %s", pid, description)
}
}
// Manager manages all processes and counts PIDs.
type Manager struct {
mutex sync.Mutex
@@ -155,6 +168,7 @@ func (pm *Manager) Add(ctx context.Context, description string, cancel context.C
pm.processMap[pid] = process
pm.mutex.Unlock()
Trace(true, pid, description, parentPID, processType)
pprofCtx := pprof.WithLabels(ctx, pprof.Labels(DescriptionPProfLabel, description, PPIDPProfLabel, string(parentPID), PIDPProfLabel, string(pid), ProcessTypePProfLabel, processType))
if currentlyRunning {
@@ -186,18 +200,12 @@ func (pm *Manager) nextPID() (start time.Time, pid IDType) {
return start, pid
}
// Remove a process from the ProcessManager.
func (pm *Manager) Remove(pid IDType) {
pm.mutex.Lock()
delete(pm.processMap, pid)
pm.mutex.Unlock()
}
func (pm *Manager) remove(process *process) {
pm.mutex.Lock()
defer pm.mutex.Unlock()
if p := pm.processMap[process.PID]; p == process {
delete(pm.processMap, process.PID)
Trace(false, process.PID, process.Description, process.ParentPID, process.Type)
}
}
+1 -1
@@ -83,7 +83,7 @@ func TestManager_Remove(t *testing.T) {
assert.NotEqual(t, GetContext(p1Ctx).GetPID(), GetContext(p2Ctx).GetPID(), "expected to get different pids got %s == %s", GetContext(p2Ctx).GetPID(), GetContext(p1Ctx).GetPID())
pm.Remove(GetPID(p2Ctx))
finished()
_, exists := pm.processMap[GetPID(p2Ctx)]
assert.False(t, exists, "PID %d is in the list but shouldn't", GetPID(p2Ctx))
+14
@@ -8,6 +8,7 @@ import (
"net/http"
"net/url"
"os"
"strings"
"sync"
"code.gitea.io/gitea/modules/log"
@@ -83,3 +84,16 @@ func Proxy() func(req *http.Request) (*url.URL, error) {
return http.ProxyFromEnvironment(req)
}
}
// EnvWithProxy returns os.Environ(), with a https_proxy env, if the given url
// needs to be proxied.
func EnvWithProxy(u *url.URL) []string {
envs := os.Environ()
if strings.EqualFold(u.Scheme, "http") || strings.EqualFold(u.Scheme, "https") {
if Match(u.Host) {
envs = append(envs, "https_proxy="+GetProxyURL())
}
}
return envs
}
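A hedged sketch of how EnvWithProxy would typically be consumed when spawning a subprocess that should inherit the configured proxy; the code.gitea.io/gitea/modules/proxy import path and the remote URL are assumptions for illustration:

package main

import (
	"net/url"
	"os/exec"

	"code.gitea.io/gitea/modules/proxy"
)

func main() {
	remote, err := url.Parse("https://example.com/org/repo.git")
	if err != nil {
		panic(err)
	}

	cmd := exec.Command("git", "ls-remote", remote.String())
	// os.Environ() plus an https_proxy entry when the remote host matches
	// the proxy allow-list, per the helper above.
	cmd.Env = proxy.EnvWithProxy(remote)
	_ = cmd.Run()
}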
+4 -1
@@ -125,7 +125,10 @@ func (q *ChannelQueue) Shutdown() {
log.Trace("ChannelQueue: %s Flushing", q.name)
// We can't use Cleanup here because that will close the channel
if err := q.FlushWithContext(q.terminateCtx); err != nil {
log.Warn("ChannelQueue: %s Terminated before completed flushing", q.name)
count := atomic.LoadInt64(&q.numInQueue)
if count > 0 {
log.Warn("ChannelQueue: %s Terminated before completed flushing", q.name)
}
return
}
log.Debug("ChannelQueue: %s Flushed", q.name)
+22 -7
@@ -95,7 +95,8 @@ func NewPersistableChannelQueue(handle HandlerFunc, cfg, exemplar interface{}) (
},
Workers: 0,
},
DataDir: config.DataDir,
DataDir: config.DataDir,
QueueName: config.Name + "-level",
}
levelQueue, err := NewLevelQueue(wrappedHandle, levelCfg, exemplar)
@@ -173,16 +174,18 @@ func (q *PersistableChannelQueue) Run(atShutdown, atTerminate func(func())) {
atShutdown(q.Shutdown)
atTerminate(q.Terminate)
if lq, ok := q.internal.(*LevelQueue); ok && lq.byteFIFO.Len(lq.shutdownCtx) != 0 {
if lq, ok := q.internal.(*LevelQueue); ok && lq.byteFIFO.Len(lq.terminateCtx) != 0 {
// Just run the level queue - we shut it down once it's flushed
go q.internal.Run(func(_ func()) {}, func(_ func()) {})
go func() {
for !q.IsEmpty() {
_ = q.internal.Flush(0)
for !lq.IsEmpty() {
_ = lq.Flush(0)
select {
case <-time.After(100 * time.Millisecond):
case <-q.internal.(*LevelQueue).shutdownCtx.Done():
log.Warn("LevelQueue: %s shut down before completely flushed", q.internal.(*LevelQueue).Name())
case <-lq.shutdownCtx.Done():
if lq.byteFIFO.Len(lq.terminateCtx) > 0 {
log.Warn("LevelQueue: %s shut down before completely flushed", q.internal.(*LevelQueue).Name())
}
return
}
}
@@ -317,10 +320,22 @@ func (q *PersistableChannelQueue) Shutdown() {
// Redirect all remaining data in the chan to the internal channel
log.Trace("PersistableChannelQueue: %s Redirecting remaining data", q.delayedStarter.name)
close(q.channelQueue.dataChan)
countOK, countLost := 0, 0
for data := range q.channelQueue.dataChan {
_ = q.internal.Push(data)
err := q.internal.Push(data)
if err != nil {
log.Error("PersistableChannelQueue: %s Unable redirect %v due to: %v", q.delayedStarter.name, data, err)
countLost++
} else {
countOK++
}
atomic.AddInt64(&q.channelQueue.numInQueue, -1)
}
if countLost > 0 {
log.Warn("PersistableChannelQueue: %s %d will be restored on restart, %d lost", q.delayedStarter.name, countOK, countLost)
} else if countOK > 0 {
log.Warn("PersistableChannelQueue: %s %d will be restored on restart", q.delayedStarter.name, countOK)
}
log.Trace("PersistableChannelQueue: %s Done Redirecting remaining data", q.delayedStarter.name)
log.Debug("PersistableChannelQueue: %s Shutdown", q.delayedStarter.name)
+4 -4
@@ -40,7 +40,7 @@ func TestPersistableChannelQueue(t *testing.T) {
Workers: 1,
BoostWorkers: 0,
MaxWorkers: 10,
Name: "first",
Name: "test-queue",
}, &testData{})
assert.NoError(t, err)
@@ -136,7 +136,7 @@ func TestPersistableChannelQueue(t *testing.T) {
Workers: 1,
BoostWorkers: 0,
MaxWorkers: 10,
Name: "second",
Name: "test-queue",
}, &testData{})
assert.NoError(t, err)
@@ -228,7 +228,7 @@ func TestPersistableChannelQueue_Pause(t *testing.T) {
Workers: 1,
BoostWorkers: 0,
MaxWorkers: 10,
Name: "first",
Name: "test-queue",
}, &testData{})
assert.NoError(t, err)
@@ -434,7 +434,7 @@ func TestPersistableChannelQueue_Pause(t *testing.T) {
Workers: 1,
BoostWorkers: 0,
MaxWorkers: 10,
Name: "second",
Name: "test-queue",
}, &testData{})
assert.NoError(t, err)
pausable, ok = queue.(Pausable)
+3 -1
@@ -178,7 +178,9 @@ func (q *ChannelUniqueQueue) Shutdown() {
go func() {
log.Trace("ChannelUniqueQueue: %s Flushing", q.name)
if err := q.FlushWithContext(q.terminateCtx); err != nil {
log.Warn("ChannelUniqueQueue: %s Terminated before completed flushing", q.name)
if !q.IsEmpty() {
log.Warn("ChannelUniqueQueue: %s Terminated before completed flushing", q.name)
}
return
}
log.Debug("ChannelUniqueQueue: %s Flushed", q.name)
@@ -9,10 +9,13 @@ import (
"testing"
"time"
"code.gitea.io/gitea/modules/log"
"github.com/stretchr/testify/assert"
)
func TestChannelUniqueQueue(t *testing.T) {
_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)
handleChan := make(chan *testData)
handle := func(data ...Data) []Data {
for _, datum := range data {
@@ -53,6 +56,8 @@ func TestChannelUniqueQueue(t *testing.T) {
}
func TestChannelUniqueQueue_Batch(t *testing.T) {
_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)
handleChan := make(chan *testData)
handle := func(data ...Data) []Data {
for _, datum := range data {
@@ -99,6 +104,8 @@ func TestChannelUniqueQueue_Batch(t *testing.T) {
}
func TestChannelUniqueQueue_Pause(t *testing.T) {
_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)
lock := sync.Mutex{}
var queue Queue
var err error
+33 -8
@@ -95,7 +95,8 @@ func NewPersistableChannelUniqueQueue(handle HandlerFunc, cfg, exemplar interfac
},
Workers: 0,
},
DataDir: config.DataDir,
DataDir: config.DataDir,
QueueName: config.Name + "-level",
}
queue.channelQueue = channelUniqueQueue.(*ChannelUniqueQueue)
@@ -210,17 +211,29 @@ func (q *PersistableChannelUniqueQueue) Run(atShutdown, atTerminate func(func())
atTerminate(q.Terminate)
_ = q.channelQueue.AddWorkers(q.channelQueue.workers, 0)
if luq, ok := q.internal.(*LevelUniqueQueue); ok && luq.ByteFIFOUniqueQueue.byteFIFO.Len(luq.shutdownCtx) != 0 {
if luq, ok := q.internal.(*LevelUniqueQueue); ok && !luq.IsEmpty() {
// Just run the level queue - we shut it down once it's flushed
go q.internal.Run(func(_ func()) {}, func(_ func()) {})
go luq.Run(func(_ func()) {}, func(_ func()) {})
go func() {
_ = q.internal.Flush(0)
log.Debug("LevelUniqueQueue: %s flushed so shutting down", q.internal.(*LevelUniqueQueue).Name())
q.internal.(*LevelUniqueQueue).Shutdown()
GetManager().Remove(q.internal.(*LevelUniqueQueue).qid)
_ = luq.Flush(0)
for !luq.IsEmpty() {
_ = luq.Flush(0)
select {
case <-time.After(100 * time.Millisecond):
case <-luq.shutdownCtx.Done():
if luq.byteFIFO.Len(luq.terminateCtx) > 0 {
log.Warn("LevelUniqueQueue: %s shut down before completely flushed", luq.Name())
}
return
}
}
log.Debug("LevelUniqueQueue: %s flushed so shutting down", luq.Name())
luq.Shutdown()
GetManager().Remove(luq.qid)
}()
} else {
log.Debug("PersistableChannelUniqueQueue: %s Skipping running the empty level queue", q.delayedStarter.name)
_ = q.internal.Flush(0)
q.internal.(*LevelUniqueQueue).Shutdown()
GetManager().Remove(q.internal.(*LevelUniqueQueue).qid)
}
@@ -286,8 +299,20 @@ func (q *PersistableChannelUniqueQueue) Shutdown() {
// Redirect all remaining data in the chan to the internal channel
close(q.channelQueue.dataChan)
log.Trace("PersistableChannelUniqueQueue: %s Redirecting remaining data", q.delayedStarter.name)
countOK, countLost := 0, 0
for data := range q.channelQueue.dataChan {
_ = q.internal.Push(data)
err := q.internal.(*LevelUniqueQueue).Push(data)
if err != nil {
log.Error("PersistableChannelUniqueQueue: %s Unable redirect %v due to: %v", q.delayedStarter.name, data, err)
countLost++
} else {
countOK++
}
}
if countLost > 0 {
log.Warn("PersistableChannelUniqueQueue: %s %d will be restored on restart, %d lost", q.delayedStarter.name, countOK, countLost)
} else if countOK > 0 {
log.Warn("PersistableChannelUniqueQueue: %s %d will be restored on restart", q.delayedStarter.name, countOK)
}
log.Trace("PersistableChannelUniqueQueue: %s Done Redirecting remaining data", q.delayedStarter.name)
@@ -0,0 +1,259 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package queue
import (
"fmt"
"strconv"
"sync"
"testing"
"time"
"code.gitea.io/gitea/modules/log"
"github.com/stretchr/testify/assert"
)
func TestPersistableChannelUniqueQueue(t *testing.T) {
tmpDir := t.TempDir()
fmt.Printf("TempDir %s\n", tmpDir)
_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)
// Common function to create the Queue
newQueue := func(name string, handle func(data ...Data) []Data) Queue {
q, err := NewPersistableChannelUniqueQueue(handle,
PersistableChannelUniqueQueueConfiguration{
Name: name,
DataDir: tmpDir,
QueueLength: 200,
MaxWorkers: 1,
BlockTimeout: 1 * time.Second,
BoostTimeout: 5 * time.Minute,
BoostWorkers: 1,
Workers: 0,
}, "task-0")
assert.NoError(t, err)
return q
}
// runs the provided queue and provides some timer function
type channels struct {
readyForShutdown chan struct{} // closed when shutdown functions have been assigned
readyForTerminate chan struct{} // closed when terminate functions have been assigned
signalShutdown chan struct{} // Should close to signal shutdown
doneShutdown chan struct{} // closed when shutdown function is done
queueTerminate []func() // list of atTerminate functions to call atTerminate - need to be accessed with lock
}
runQueue := func(q Queue, lock *sync.Mutex) *channels {
chans := &channels{
readyForShutdown: make(chan struct{}),
readyForTerminate: make(chan struct{}),
signalShutdown: make(chan struct{}),
doneShutdown: make(chan struct{}),
}
go q.Run(func(atShutdown func()) {
go func() {
lock.Lock()
select {
case <-chans.readyForShutdown:
default:
close(chans.readyForShutdown)
}
lock.Unlock()
<-chans.signalShutdown
atShutdown()
close(chans.doneShutdown)
}()
}, func(atTerminate func()) {
lock.Lock()
defer lock.Unlock()
select {
case <-chans.readyForTerminate:
default:
close(chans.readyForTerminate)
}
chans.queueTerminate = append(chans.queueTerminate, atTerminate)
})
return chans
}
// call to shutdown and terminate the queue associated with the channels
doTerminate := func(chans *channels, lock *sync.Mutex) {
<-chans.readyForTerminate
lock.Lock()
callbacks := []func(){}
callbacks = append(callbacks, chans.queueTerminate...)
lock.Unlock()
for _, callback := range callbacks {
callback()
}
}
mapLock := sync.Mutex{}
executedInitial := map[string][]string{}
hasInitial := map[string][]string{}
fillQueue := func(name string, done chan struct{}) {
t.Run("Initial Filling: "+name, func(t *testing.T) {
lock := sync.Mutex{}
startAt100Queued := make(chan struct{})
stopAt20Shutdown := make(chan struct{}) // stop and shutdown at the 20th item
handle := func(data ...Data) []Data {
<-startAt100Queued
for _, datum := range data {
s := datum.(string)
mapLock.Lock()
executedInitial[name] = append(executedInitial[name], s)
mapLock.Unlock()
if s == "task-20" {
close(stopAt20Shutdown)
}
}
return nil
}
q := newQueue(name, handle)
// add 100 tasks to the queue
for i := 0; i < 100; i++ {
_ = q.Push("task-" + strconv.Itoa(i))
}
close(startAt100Queued)
chans := runQueue(q, &lock)
<-chans.readyForShutdown
<-stopAt20Shutdown
close(chans.signalShutdown)
<-chans.doneShutdown
_ = q.Push("final")
// check which tasks are still in the queue
for i := 0; i < 100; i++ {
if has, _ := q.(UniqueQueue).Has("task-" + strconv.Itoa(i)); has {
mapLock.Lock()
hasInitial[name] = append(hasInitial[name], "task-"+strconv.Itoa(i))
mapLock.Unlock()
}
}
if has, _ := q.(UniqueQueue).Has("final"); has {
mapLock.Lock()
hasInitial[name] = append(hasInitial[name], "final")
mapLock.Unlock()
} else {
assert.Fail(t, "UniqueQueue %s should have \"final\"", name)
}
doTerminate(chans, &lock)
mapLock.Lock()
assert.Equal(t, 101, len(executedInitial[name])+len(hasInitial[name]))
mapLock.Unlock()
})
close(done)
}
doneA := make(chan struct{})
doneB := make(chan struct{})
go fillQueue("QueueA", doneA)
go fillQueue("QueueB", doneB)
<-doneA
<-doneB
executedEmpty := map[string][]string{}
hasEmpty := map[string][]string{}
emptyQueue := func(name string, done chan struct{}) {
t.Run("Empty Queue: "+name, func(t *testing.T) {
lock := sync.Mutex{}
stop := make(chan struct{})
// collect the tasks that have been executed
handle := func(data ...Data) []Data {
lock.Lock()
for _, datum := range data {
mapLock.Lock()
executedEmpty[name] = append(executedEmpty[name], datum.(string))
mapLock.Unlock()
if datum.(string) == "final" {
close(stop)
}
}
lock.Unlock()
return nil
}
q := newQueue(name, handle)
chans := runQueue(q, &lock)
<-chans.readyForShutdown
<-stop
close(chans.signalShutdown)
<-chans.doneShutdown
// check which tasks are still in the queue
for i := 0; i < 100; i++ {
if has, _ := q.(UniqueQueue).Has("task-" + strconv.Itoa(i)); has {
mapLock.Lock()
hasEmpty[name] = append(hasEmpty[name], "task-"+strconv.Itoa(i))
mapLock.Unlock()
}
}
doTerminate(chans, &lock)
mapLock.Lock()
assert.Equal(t, 101, len(executedInitial[name])+len(executedEmpty[name]))
assert.Equal(t, 0, len(hasEmpty[name]))
mapLock.Unlock()
})
close(done)
}
doneA = make(chan struct{})
doneB = make(chan struct{})
go emptyQueue("QueueA", doneA)
go emptyQueue("QueueB", doneB)
<-doneA
<-doneB
mapLock.Lock()
t.Logf("TestPersistableChannelUniqueQueue executedInitiallyA=%v, executedInitiallyB=%v, executedToEmptyA=%v, executedToEmptyB=%v",
len(executedInitial["QueueA"]), len(executedInitial["QueueB"]), len(executedEmpty["QueueA"]), len(executedEmpty["QueueB"]))
// reset and rerun
executedInitial = map[string][]string{}
hasInitial = map[string][]string{}
executedEmpty = map[string][]string{}
hasEmpty = map[string][]string{}
mapLock.Unlock()
doneA = make(chan struct{})
doneB = make(chan struct{})
go fillQueue("QueueA", doneA)
go fillQueue("QueueB", doneB)
<-doneA
<-doneB
doneA = make(chan struct{})
doneB = make(chan struct{})
go emptyQueue("QueueA", doneA)
go emptyQueue("QueueB", doneB)
<-doneA
<-doneB
mapLock.Lock()
t.Logf("TestPersistableChannelUniqueQueue executedInitiallyA=%v, executedInitiallyB=%v, executedToEmptyA=%v, executedToEmptyB=%v",
len(executedInitial["QueueA"]), len(executedInitial["QueueB"]), len(executedEmpty["QueueA"]), len(executedEmpty["QueueB"]))
mapLock.Unlock()
}
+1 -1
@@ -319,7 +319,7 @@ func initRepoCommit(ctx context.Context, tmpPath string, repo *repo_model.Reposi
cmd := git.NewCommand(ctx,
"commit", git.CmdArg(fmt.Sprintf("--author='%s <%s>'", sig.Name, sig.Email)),
"-m", "Initial commit",
"--message=Initial commit",
)
sign, keyID, signer, _ := asymkey_service.SignInitialCommit(ctx, tmpPath, u)
+10 -2
@@ -21,6 +21,7 @@ import (
"text/template"
"time"
"code.gitea.io/gitea/modules/auth/password/hash"
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/generate"
"code.gitea.io/gitea/modules/json"
@@ -944,7 +945,7 @@ func loadFromConf(allowEmpty bool, extraConfig string) {
if SecretKey == "" {
// FIXME: https://github.com/go-gitea/gitea/issues/16832
// Until it supports rotating an existing secret key, we shouldn't move users off of the widely used default value
SecretKey = "!#@FDEWREWR&*(" // nolint:gosec
SecretKey = "!#@FDEWREWR&*(" //nolint:gosec
}
CookieRememberName = sec.Key("COOKIE_REMEMBER_NAME").MustString("gitea_incredible")
@@ -964,7 +965,14 @@ func loadFromConf(allowEmpty bool, extraConfig string) {
DisableGitHooks = sec.Key("DISABLE_GIT_HOOKS").MustBool(true)
DisableWebhooks = sec.Key("DISABLE_WEBHOOKS").MustBool(false)
OnlyAllowPushIfGiteaEnvironmentSet = sec.Key("ONLY_ALLOW_PUSH_IF_GITEA_ENVIRONMENT_SET").MustBool(true)
PasswordHashAlgo = sec.Key("PASSWORD_HASH_ALGO").MustString("pbkdf2")
// Ensure that the provided default hash algorithm is a valid hash algorithm
var algorithm *hash.PasswordHashAlgorithm
PasswordHashAlgo, algorithm = hash.SetDefaultPasswordHashAlgorithm(sec.Key("PASSWORD_HASH_ALGO").MustString(""))
if algorithm == nil {
log.Fatal("The provided password hash algorithm was invalid: %s", sec.Key("PASSWORD_HASH_ALGO").MustString(""))
}
CSRFCookieHTTPOnly = sec.Key("CSRF_COOKIE_HTTP_ONLY").MustBool(true)
PasswordCheckPwn = sec.Key("PASSWORD_CHECK_PWN").MustBool(false)
SuccessfulTokensCacheSize = sec.Key("SUCCESSFUL_TOKENS_CACHE_SIZE").MustInt(20)
+22
@@ -5,6 +5,7 @@
package util
import (
"errors"
"io"
)
@@ -18,3 +19,24 @@ func ReadAtMost(r io.Reader, buf []byte) (n int, err error) {
}
return n, err
}
// ErrNotEmpty is an error reported when there is a non-empty reader
var ErrNotEmpty = errors.New("not-empty")
// IsEmptyReader reads a reader and ensures it is empty
func IsEmptyReader(r io.Reader) (err error) {
var buf [1]byte
for {
n, err := r.Read(buf[:])
if err != nil {
if err == io.EOF {
return nil
}
return err
}
if n > 0 {
return ErrNotEmpty
}
}
}
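A small usage sketch of the new helper, assuming it is exported from code.gitea.io/gitea/modules/util as shown: nil means the reader was already drained, ErrNotEmpty means at least one byte remained.

package main

import (
	"fmt"
	"strings"

	"code.gitea.io/gitea/modules/util"
)

func main() {
	fmt.Println(util.IsEmptyReader(strings.NewReader("")))         // <nil>
	fmt.Println(util.IsEmptyReader(strings.NewReader("leftover"))) // not-empty
}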
+30 -26
@@ -23,19 +23,23 @@ import (
"code.gitea.io/gitea/modules/log"
packages_module "code.gitea.io/gitea/modules/packages"
container_module "code.gitea.io/gitea/modules/packages/container"
"code.gitea.io/gitea/modules/packages/container/oci"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/routers/api/packages/helper"
packages_service "code.gitea.io/gitea/services/packages"
container_service "code.gitea.io/gitea/services/packages/container"
digest "github.com/opencontainers/go-digest"
)
// maximum size of a container manifest
// https://github.com/opencontainers/distribution-spec/blob/main/spec.md#pushing-manifests
const maxManifestSize = 10 * 1024 * 1024
var imageNamePattern = regexp.MustCompile(`\A[a-z0-9]+([._-][a-z0-9]+)*(/[a-z0-9]+([._-][a-z0-9]+)*)*\z`)
var (
imageNamePattern = regexp.MustCompile(`\A[a-z0-9]+([._-][a-z0-9]+)*(/[a-z0-9]+([._-][a-z0-9]+)*)*\z`)
referencePattern = regexp.MustCompile(`\A[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}\z`)
)
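These two patterns follow the distribution-spec grammar: repository names are lower-case path components separated by '/', and tag references are up to 128 characters and may not start with '.' or '-'. A standalone sketch exercising the exact expressions from this hunk:

package main

import (
	"fmt"
	"regexp"
)

var (
	imageNamePattern = regexp.MustCompile(`\A[a-z0-9]+([._-][a-z0-9]+)*(/[a-z0-9]+([._-][a-z0-9]+)*)*\z`)
	referencePattern = regexp.MustCompile(`\A[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}\z`)
)

func main() {
	fmt.Println(imageNamePattern.MatchString("my-org/my-image")) // true
	fmt.Println(imageNamePattern.MatchString("My-Org/Image"))    // false: upper case is not allowed
	fmt.Println(referencePattern.MatchString("v1.2.3"))          // true
	fmt.Println(referencePattern.MatchString("-latest"))         // false: must start with an alphanumeric or '_'
}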
type containerHeaders struct {
Status int
@@ -407,16 +411,16 @@ func CancelUploadBlob(ctx *context.Context) {
}
func getBlobFromContext(ctx *context.Context) (*packages_model.PackageFileDescriptor, error) {
digest := ctx.Params("digest")
d := ctx.Params("digest")
if !oci.Digest(digest).Validate() {
if digest.Digest(d).Validate() != nil {
return nil, container_model.ErrContainerBlobNotExist
}
return workaroundGetContainerBlob(ctx, &container_model.BlobSearchOptions{
OwnerID: ctx.Package.Owner.ID,
Image: ctx.Params("image"),
Digest: digest,
Digest: d,
})
}
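The digest checks above now go through the go-digest library (imported in this file as digest "github.com/opencontainers/go-digest"); its Validate method returns an error rather than a bool, which is why the conditions are inverted to Validate() != nil. A brief sketch, assuming that library's documented behaviour:

package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func main() {
	good := digest.FromString("hello") // sha256:2cf24d...
	fmt.Println(good.Validate())       // <nil>: well-formed digest

	bad := digest.Digest("not-a-digest")
	fmt.Println(bad.Validate() != nil) // true: missing algorithm prefix
}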
@@ -471,14 +475,14 @@ func GetBlob(ctx *context.Context) {
// https://github.com/opencontainers/distribution-spec/blob/main/spec.md#deleting-blobs
func DeleteBlob(ctx *context.Context) {
digest := ctx.Params("digest")
d := ctx.Params("digest")
if !oci.Digest(digest).Validate() {
if digest.Digest(d).Validate() != nil {
apiErrorDefined(ctx, errBlobUnknown)
return
}
if err := deleteBlob(ctx.Package.Owner.ID, ctx.Params("image"), digest); err != nil {
if err := deleteBlob(ctx.Package.Owner.ID, ctx.Params("image"), d); err != nil {
apiError(ctx, http.StatusInternalServerError, err)
return
}
@@ -493,15 +497,15 @@ func UploadManifest(ctx *context.Context) {
reference := ctx.Params("reference")
mci := &manifestCreationInfo{
MediaType: oci.MediaType(ctx.Req.Header.Get("Content-Type")),
MediaType: ctx.Req.Header.Get("Content-Type"),
Owner: ctx.Package.Owner,
Creator: ctx.Doer,
Image: ctx.Params("image"),
Reference: reference,
IsTagged: !oci.Digest(reference).Validate(),
IsTagged: digest.Digest(reference).Validate() != nil,
}
if mci.IsTagged && !oci.Reference(reference).Validate() {
if mci.IsTagged && !referencePattern.MatchString(reference) {
apiErrorDefined(ctx, errManifestInvalid.WithMessage("Tag is invalid"))
return
}
@@ -539,7 +543,7 @@ func UploadManifest(ctx *context.Context) {
})
}
func getManifestFromContext(ctx *context.Context) (*packages_model.PackageFileDescriptor, error) {
func getBlobSearchOptionsFromContext(ctx *context.Context) (*container_model.BlobSearchOptions, error) {
reference := ctx.Params("reference")
opts := &container_model.BlobSearchOptions{
@@ -547,14 +551,24 @@ func getManifestFromContext(ctx *context.Context) (*packages_model.PackageFileDe
Image: ctx.Params("image"),
IsManifest: true,
}
if oci.Digest(reference).Validate() {
if digest.Digest(reference).Validate() == nil {
opts.Digest = reference
} else if oci.Reference(reference).Validate() {
} else if referencePattern.MatchString(reference) {
opts.Tag = reference
} else {
return nil, container_model.ErrContainerBlobNotExist
}
return opts, nil
}
func getManifestFromContext(ctx *context.Context) (*packages_model.PackageFileDescriptor, error) {
opts, err := getBlobSearchOptionsFromContext(ctx)
if err != nil {
return nil, err
}
return workaroundGetContainerBlob(ctx, opts)
}
@@ -611,18 +625,8 @@ func GetManifest(ctx *context.Context) {
// https://github.com/opencontainers/distribution-spec/blob/main/spec.md#deleting-tags
// https://github.com/opencontainers/distribution-spec/blob/main/spec.md#deleting-manifests
func DeleteManifest(ctx *context.Context) {
reference := ctx.Params("reference")
opts := &container_model.BlobSearchOptions{
OwnerID: ctx.Package.Owner.ID,
Image: ctx.Params("image"),
IsManifest: true,
}
if oci.Digest(reference).Validate() {
opts.Digest = reference
} else if oci.Reference(reference).Validate() {
opts.Tag = reference
} else {
opts, err := getBlobSearchOptionsFromContext(ctx)
if err != nil {
apiErrorDefined(ctx, errManifestUnknown)
return
}
+51 -19
@@ -18,16 +18,31 @@ import (
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/json"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/notification"
packages_module "code.gitea.io/gitea/modules/packages"
container_module "code.gitea.io/gitea/modules/packages/container"
"code.gitea.io/gitea/modules/packages/container/oci"
"code.gitea.io/gitea/modules/util"
packages_service "code.gitea.io/gitea/services/packages"
digest "github.com/opencontainers/go-digest"
oci "github.com/opencontainers/image-spec/specs-go/v1"
)
func isValidMediaType(mt string) bool {
return strings.HasPrefix(mt, "application/vnd.docker.") || strings.HasPrefix(mt, "application/vnd.oci.")
}
func isImageManifestMediaType(mt string) bool {
return strings.EqualFold(mt, oci.MediaTypeImageManifest) || strings.EqualFold(mt, "application/vnd.docker.distribution.manifest.v2+json")
}
func isImageIndexMediaType(mt string) bool {
return strings.EqualFold(mt, oci.MediaTypeImageIndex) || strings.EqualFold(mt, "application/vnd.docker.distribution.manifest.list.v2+json")
}
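A sketch of how the helpers above classify incoming Content-Type values; they are package-private, so they are restated here verbatim, and the OCI constants are assumed to come from the image-spec module imported in this hunk:

package main

import (
	"fmt"
	"strings"

	oci "github.com/opencontainers/image-spec/specs-go/v1"
)

func isImageManifestMediaType(mt string) bool {
	return strings.EqualFold(mt, oci.MediaTypeImageManifest) || strings.EqualFold(mt, "application/vnd.docker.distribution.manifest.v2+json")
}

func isImageIndexMediaType(mt string) bool {
	return strings.EqualFold(mt, oci.MediaTypeImageIndex) || strings.EqualFold(mt, "application/vnd.docker.distribution.manifest.list.v2+json")
}

func main() {
	for _, mt := range []string{
		oci.MediaTypeImageManifest, // OCI single-platform manifest
		"application/vnd.docker.distribution.manifest.v2+json",      // Docker schema 2 manifest
		oci.MediaTypeImageIndex,                                      // OCI multi-platform index
		"application/vnd.docker.distribution.manifest.list.v2+json", // Docker manifest list
	} {
		fmt.Printf("%s -> manifest=%t index=%t\n", mt, isImageManifestMediaType(mt), isImageIndexMediaType(mt))
	}
}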
// manifestCreationInfo describes a manifest to create
type manifestCreationInfo struct {
MediaType oci.MediaType
MediaType string
Owner *user_model.User
Creator *user_model.User
Image string
@@ -37,12 +52,12 @@ type manifestCreationInfo struct {
}
func processManifest(mci *manifestCreationInfo, buf *packages_module.HashedBuffer) (string, error) {
var schema oci.SchemaMediaBase
if err := json.NewDecoder(buf).Decode(&schema); err != nil {
var index oci.Index
if err := json.NewDecoder(buf).Decode(&index); err != nil {
return "", err
}
if schema.SchemaVersion != 2 {
if index.SchemaVersion != 2 {
return "", errUnsupported.WithMessage("Schema version is not supported")
}
@@ -50,19 +65,17 @@ func processManifest(mci *manifestCreationInfo, buf *packages_module.HashedBuffe
return "", err
}
if !mci.MediaType.IsValid() {
mci.MediaType = schema.MediaType
if !mci.MediaType.IsValid() {
if !isValidMediaType(mci.MediaType) {
mci.MediaType = index.MediaType
if !isValidMediaType(mci.MediaType) {
return "", errManifestInvalid.WithMessage("MediaType not recognized")
}
}
if mci.MediaType.IsImageManifest() {
d, err := processImageManifest(mci, buf)
return d, err
} else if mci.MediaType.IsImageIndex() {
d, err := processImageManifestIndex(mci, buf)
return d, err
if isImageManifestMediaType(mci.MediaType) {
return processImageManifest(mci, buf)
} else if isImageIndexMediaType(mci.MediaType) {
return processImageManifestIndex(mci, buf)
}
return "", errManifestInvalid
}
@@ -169,6 +182,10 @@ func processImageManifest(mci *manifestCreationInfo, buf *packages_module.Hashed
return err
}
if err := notifyPackageCreate(mci.Creator, pv); err != nil {
return err
}
manifestDigest = digest
return nil
@@ -205,7 +222,7 @@ func processImageManifestIndex(mci *manifestCreationInfo, buf *packages_module.H
}
for _, manifest := range index.Manifests {
if !manifest.MediaType.IsImageManifest() {
if !isImageManifestMediaType(manifest.MediaType) {
return errManifestInvalid
}
@@ -258,6 +275,10 @@ func processImageManifestIndex(mci *manifestCreationInfo, buf *packages_module.H
return err
}
if err := notifyPackageCreate(mci.Creator, pv); err != nil {
return err
}
manifestDigest = digest
return nil
@@ -269,6 +290,17 @@ func processImageManifestIndex(mci *manifestCreationInfo, buf *packages_module.H
return manifestDigest, nil
}
func notifyPackageCreate(doer *user_model.User, pv *packages_model.PackageVersion) error {
pd, err := packages_model.GetPackageDescriptor(db.DefaultContext, pv)
if err != nil {
return err
}
notification.NotifyPackageCreate(doer, pd)
return nil
}
func createPackageAndVersion(ctx context.Context, mci *manifestCreationInfo, metadata *container_module.Metadata) (*packages_model.PackageVersion, error) {
created := true
p := &packages_model.Package{
@@ -345,8 +377,8 @@ func createPackageAndVersion(ctx context.Context, mci *manifestCreationInfo, met
}
type blobReference struct {
Digest oci.Digest
MediaType oci.MediaType
Digest digest.Digest
MediaType string
Name string
File *packages_model.PackageFileDescriptor
ExpectedSize int64
@@ -380,7 +412,7 @@ func createFileFromBlobReference(ctx context.Context, pv, uploadVersion *package
}
props := map[string]string{
container_module.PropertyMediaType: string(ref.MediaType),
container_module.PropertyMediaType: ref.MediaType,
container_module.PropertyDigest: string(ref.Digest),
}
for name, value := range props {
@@ -425,7 +457,7 @@ func createManifestBlob(ctx context.Context, mci *manifestCreationInfo, pv *pack
manifestDigest := digestFromHashSummer(buf)
err = createFileFromBlobReference(ctx, pv, nil, &blobReference{
Digest: oci.Digest(manifestDigest),
Digest: digest.Digest(manifestDigest),
MediaType: mci.MediaType,
Name: container_model.ManifestFilename,
File: &packages_model.PackageFileDescriptor{Blob: pb},
+4 -2
@@ -22,8 +22,10 @@ import (
)
// https://peps.python.org/pep-0426/#name
var normalizer = strings.NewReplacer(".", "-", "_", "-")
var nameMatcher = regexp.MustCompile(`\A(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\.\-_]*[a-zA-Z0-9])\z`)
var (
normalizer = strings.NewReplacer(".", "-", "_", "-")
nameMatcher = regexp.MustCompile(`\A(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\.\-_]*[a-zA-Z0-9])\z`)
)
// https://peps.python.org/pep-0440/#appendix-b-parsing-version-strings-with-regular-expressions
var versionMatcher = regexp.MustCompile(`\Av?` +
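A quick sketch of the replacer and matcher above applied to a raw package name; note that the replacer only rewrites separators, while full PEP 503 normalization also lower-cases, which this sketch applies explicitly for illustration:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	normalizer  = strings.NewReplacer(".", "-", "_", "-")
	nameMatcher = regexp.MustCompile(`\A(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\.\-_]*[a-zA-Z0-9])\z`)
)

func main() {
	name := "My_Package.Name"
	fmt.Println(nameMatcher.MatchString(name))              // true: valid per PEP 426
	fmt.Println(strings.ToLower(normalizer.Replace(name)))  // my-package-name
}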
+2
@@ -33,6 +33,8 @@ func CreateRepo(ctx *context.APIContext) {
// responses:
// "201":
// "$ref": "#/responses/Repository"
// "400":
// "$ref": "#/responses/error"
// "403":
// "$ref": "#/responses/forbidden"
// "404":
+1 -1
@@ -16,10 +16,10 @@ import (
"code.gitea.io/gitea/models/auth"
"code.gitea.io/gitea/models/db"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/auth/password"
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/convert"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/password"
"code.gitea.io/gitea/modules/setting"
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
+1 -1
@@ -155,7 +155,7 @@ func Migrate(ctx *context.APIContext) {
Issues: form.Issues,
Milestones: form.Milestones,
Labels: form.Labels,
Comments: true,
Comments: form.Issues || form.PullRequests,
PullRequests: form.PullRequests,
Releases: form.Releases,
GitServiceType: gitServiceType,

Some files were not shown because too many files have changed in this diff.