Compare commits

...

43 Commits

Author SHA1 Message Date
Zettat123
f913d90ab6 Update CHANGELOG for v1.25.5 (#36885)
Wait

- ~~#36888~~
2026-03-12 19:26:33 -06:00
Zettat123
156d9ffb23 Update toolchain to 1.25.8 for v1.25 (#36888)
> go1.25.8 (released 2026-03-05) includes security fixes to the
html/template, net/url, and os packages, as well as bug fixes to the go
command, the compiler, and the os package. See the [Go 1.25.8
milestone](https://github.com/golang/go/issues?q=milestone%3AGo1.25.8+label%3ACherryPickApproved)
on our issue tracker for details.
2026-03-11 17:37:33 -07:00
Lunny Xiao
96515c0f20 Fix org permission API visibility checks for hidden members and private orgs (#36798) (#36841)
backport #36798 

- fix wrong parameter of HasOrgOrUserVisible in
routers/api/v1/org/org.go
- add integration tests covering the bug fix
- merge permissions API tests

---
Generated by a coding agent with Codex 5.2
2026-03-08 16:26:08 +00:00
Lunny Xiao
4f562da975 Fix non-admins unable to automerge PRs from forks (#36833) (#36843)
backport #36833 

Make `handlePullRequestAutoMerge` correctly check the permissions of the
merging user against pr.BaseRepo.
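As a minimal sketch of the idea (illustrative types and names, not Gitea's actual models): the permission check must run against the *base* repository the PR merges into, never the head (fork) repository the commits come from.

```go
package main

import "fmt"

// Repo and PullRequest are simplified stand-ins for Gitea's models.
type Repo struct{ Writers map[string]bool }

type PullRequest struct{ BaseRepo, HeadRepo *Repo }

// canAutoMerge reports whether the user may schedule an automerge.
// The check uses pr.BaseRepo: write access to the fork alone is not enough.
func canAutoMerge(pr *PullRequest, user string) bool {
	return pr.BaseRepo.Writers[user]
}

func main() {
	base := &Repo{Writers: map[string]bool{"maintainer": true}}
	fork := &Repo{Writers: map[string]bool{"contributor": true}}
	pr := &PullRequest{BaseRepo: base, HeadRepo: fork}
	fmt.Println(canAutoMerge(pr, "contributor")) // false: fork-only access
	fmt.Println(canAutoMerge(pr, "maintainer"))  // true: base-repo write access
}
```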

Co-authored-by: Michael Hoang <10492681+Enzime@users.noreply.github.com>
Co-authored-by: Michael Hoang <enzime@users.noreply.github.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-03-08 10:18:35 +00:00
Giteabot
76c539cd57 Fix bug to check whether user can update pull request branch or rebase branch (#36465) (#36838)
Backport #36465 by @lunny

When checking whether a user can update a pull request branch or perform
an update via rebase, a maintainer should inherit the pull request
author’s permissions if Allow maintainer edits is enabled.
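The inheritance rule can be sketched as follows (hypothetical helper, not Gitea's actual API): a user who lacks write access to the head branch still qualifies when maintainer edits are allowed and they have write access to the base repository.

```go
package main

import "fmt"

// Perm models a simplified repository permission level.
type Perm int

const (
	PermNone Perm = iota
	PermRead
	PermWrite
)

// canUpdateBranch reports whether a user may update (or rebase) the PR's
// head branch. With "Allow maintainer edits" enabled, a user with write
// access to the base repo inherits the PR author's right to update it.
func canUpdateBranch(headPerm, basePerm Perm, allowMaintainerEdits bool) bool {
	if headPerm >= PermWrite {
		return true // e.g. the PR author on their own fork
	}
	return allowMaintainerEdits && basePerm >= PermWrite
}

func main() {
	// A base-repo maintainer with no fork access, maintainer edits on/off:
	fmt.Println(canUpdateBranch(PermNone, PermWrite, true))  // true
	fmt.Println(canUpdateBranch(PermNone, PermWrite, false)) // false
}
```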

Signed-off-by: Lunny Xiao <xiaolunwen@gmail.com>
Signed-off-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-03-08 03:23:54 +00:00
Lunny Xiao
f7e3569fab Add a git grep search timeout (#36809) (#36835)
Backport #36809
2026-03-07 20:40:16 +00:00
Copilot
b3290b62fc Backport: Make security-check informational only (#36681) (#36852)
Backport #36681

`security-check` (govulncheck) was failing CI on all PRs whenever
vulnerabilities existed in dependencies. Since
https://github.com/go-gitea/gitea/security/dependabot already surfaces
this information, the check should be informational only.

- **`Makefile`**: Append `|| true` to the `security-check` target so
govulncheck output is preserved but non-zero exits no longer break CI.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: silverwind <115237+silverwind@users.noreply.github.com>
2026-03-06 22:53:59 +00:00
Giteabot
f7ac507671 Fix dump release asset bug (#36799) (#36839)
Backport #36799 by @lunny

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
2026-03-06 19:50:17 +00:00
Giteabot
e2517e0fa9 Fix forwarded proto handling for public URL detection (#36810) (#36836)
Backport #36810 by @lunny

- normalize `X-Forwarded-Proto`/related headers to accept only
`http`/`https`
- ignore malformed or injected scheme values to prevent spoofed
canonical URLs
- add tests covering malicious and multi-valued forwarded proto headers
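The normalization described above can be sketched like this (an illustrative function, not the patch's actual code): only `http`/`https` survive, multi-valued headers use the first entry, and anything else is dropped rather than echoed into canonical URLs.

```go
package main

import (
	"fmt"
	"strings"
)

// forwardedProto returns a safe scheme from an X-Forwarded-Proto header
// value, accepting only "http" or "https". Multi-valued headers such as
// "https, http" use the first entry; malformed values are rejected.
func forwardedProto(header string) (string, bool) {
	first, _, _ := strings.Cut(header, ",")
	switch proto := strings.ToLower(strings.TrimSpace(first)); proto {
	case "http", "https":
		return proto, true
	default:
		return "", false
	}
}

func main() {
	fmt.Println(forwardedProto("https"))               // https true
	fmt.Println(forwardedProto("HTTPS, http"))         // https true
	fmt.Println(forwardedProto("javascript:alert(1)")) // rejected
}
```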

---
Generated by a coding agent with Codex 5.2

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
2026-03-06 19:02:50 +00:00
Giteabot
413074b1e1 Fix OAuth2 authorization code expiry and reuse handling (#36797) (#36851)
Backport #36797 by @lunny

- set OAuth2 authorization code `ValidUntil` on creation and add expiry
checks during exchange
- return a specific error when codes are invalidated twice to prevent
concurrent reuse
- add unit tests covering validity timestamps, expiration, and double
invalidation

---
Generated by a coding agent with Codex 5.2

Signed-off-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-06 10:33:20 -08:00
Lunny Xiao
3c46a3deb3 Fix bug when pushing mirror with wiki (#36795) (#36807)
Fix #36736
Backport #36795

Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2026-03-06 16:26:34 +01:00
Giteabot
5552eff6e7 Fix artifacts v4 backend upload problems (#36805) (#36834)
Backport #36805 by @ChristopherHX

* Use base64.RawURLEncoding to avoid equals signs
  * when uploading via the nodejs package they seem to get lost
* Support uploads with unspecified length
* Support uploads with a single named blockid
  * without requiring a blockmap

Signed-off-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-03-06 14:22:53 +01:00
Lunny Xiao
f44f7bf2d3 upgrade to github.com/cloudflare/circl 1.6.3, svgo 4.0.1, markdownlint-cli 0.48.0 (#36840)
Backport #36837

---------

Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
2026-03-06 12:55:33 +01:00
Giteabot
0f55eff0e7 Fix CRAN package version validation to allow more than 4 version components (#36813) (#36821)
Backport #36813

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: wxiaoguang <2114189+wxiaoguang@users.noreply.github.com>
2026-03-04 09:29:22 +08:00
Giteabot
b3bc79262d Add validation constraints for repository creation fields (#36671) (#36757)
Backport #36671 by @lunny

Adds validation constraints to repository creation inputs, enforcing
max-length limits for labels/license/readme and enum validation for
trust model and object format. Updates both the API option struct and
the web form struct to keep validation consistent.
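The two constraint kinds can be sketched as follows (field names and limits here are illustrative, not the patch's actual values): a max-length check for free-text inputs and an enum check for the trust model.

```go
package main

import "fmt"

// validTrustModels mirrors the enum-style validation added to the
// repository creation options (the set shown here is illustrative).
var validTrustModels = map[string]bool{
	"default": true, "collaborator": true,
	"committer": true, "collaboratorcommitter": true,
}

const maxLabelsLen = 255 // hypothetical max-length limit

// validateCreateOptions applies the two kinds of constraint the patch
// adds: a max length for free-text fields and an enum membership check.
func validateCreateOptions(labels, trustModel string) error {
	if len(labels) > maxLabelsLen {
		return fmt.Errorf("labels too long: %d > %d", len(labels), maxLabelsLen)
	}
	if trustModel != "" && !validTrustModels[trustModel] {
		return fmt.Errorf("invalid trust model %q", trustModel)
	}
	return nil
}

func main() {
	fmt.Println(validateCreateOptions("MIT", "default"))   // <nil>
	fmt.Println(validateCreateOptions("MIT", "telepathy")) // error
}
```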

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-02-25 20:43:00 +00:00
Lunny Xiao
d1bd84f8cf Fix force push time-line commit comments of pull request (#36653) (#36717)
Backport #36653 

Fix #36647
Fix #25827
Fix #25870

Signed-off-by: silverwind <me@silverwind.io>
Signed-off-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-25 11:54:30 -08:00
Giteabot
19e36e8a70 Fix SVG height calculation in diff viewer (#36748) (#36750)
Backport #36748 by POPSuL

Fixes #36742

Co-authored-by: Viktor Suprun <popsul1993@gmail.com>
2026-02-26 00:46:35 +08:00
Giteabot
00566cc953 Fix track time list permission check (#36662) (#36744)
Backport #36662 by @lunny

Signed-off-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-02-25 07:57:47 -08:00
Giteabot
579615936c Fix path resolving (#36734) (#36746)
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-02-25 12:11:53 +08:00
Giteabot
2aee44cdd9 Prevent redirect bypasses via backslash-encoded paths (#36660) (#36716)
Backport #36660 by @lunny

This change tightens relative URL validation to reject raw backslashes
and `%5c` (encoded backslash), since browsers and URL normalizers can
treat backslashes as path separators. That normalization can turn
seemingly relative paths into scheme-relative URLs, creating
open-redirect risk.

Visit the URLs below to reproduce the problem.

http://localhost:3000/user/login?redirect_to=/a/../\example.com

http://localhost:3000/user/login?redirect_to=/a/../%5cexample.com
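A sketch of the tightened validation (an illustrative function, not the patch's actual code): require a single leading `/`, and reject raw backslashes and `%5c`, since browsers may normalize `\` to `/` and turn the path into a scheme-relative URL.

```go
package main

import (
	"fmt"
	"strings"
)

// isRelativeRedirect reports whether redirectTo is safe to use as a
// same-site redirect target. Besides requiring exactly one leading "/",
// it rejects raw backslashes and the %5c encoding.
func isRelativeRedirect(redirectTo string) bool {
	if !strings.HasPrefix(redirectTo, "/") || strings.HasPrefix(redirectTo, "//") {
		return false
	}
	lower := strings.ToLower(redirectTo)
	return !strings.Contains(lower, `\`) && !strings.Contains(lower, "%5c")
}

func main() {
	fmt.Println(isRelativeRedirect("/user/repo"))           // true
	fmt.Println(isRelativeRedirect(`/a/../\example.com`))   // false
	fmt.Println(isRelativeRedirect("/a/../%5Cexample.com")) // false
}
```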

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-02-23 01:59:59 +01:00
Giteabot
e7fca90a78 Fix get release draft permission check (#36659) (#36715)
Backport #36659 by @lunny

A draft release and its attachments require write permission to access.

Signed-off-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-22 22:09:07 +00:00
Lunny Xiao
3422318545 Fix push time bug (#36693) (#36713)
When displaying or searching a branch's pushed time, we should use
`updated_unix` rather than `commit_time`.

Fix #36633
Backport #36693

Signed-off-by: silverwind <me@silverwind.io>
Co-authored-by: silverwind <me@silverwind.io>
2026-02-22 22:27:40 +01:00
Giteabot
996cc12bf7 Add migration http transport for push/sync mirror lfs (#36665) (#36691)
Backport #36665 by @lunny

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-02-22 08:56:14 +00:00
Giteabot
99bc281856 Add some validation on values provided to USER_DISABLED_FEATURES and EXTERNAL_USER_DISABLED_FEATURES (#36688) (#36692) 2026-02-21 11:13:15 -05:00
Giteabot
8051056075 Fix track time issue id (#36664) (#36689)
Backport #36664 by @lunny

---------

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-02-21 00:26:56 +00:00
Lunny Xiao
0b2f7575e7 Upgrade gogit to 5.16.5 (#36687)
Backport #36680
2026-02-20 15:02:38 -08:00
Giteabot
216cf96cd4 Fix bug the protected branch rule name is conflicted with renamed branch name (#36650) (#36661)
Backport #36650 by @lunny

Fix #36464

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-02-17 21:57:43 +00:00
Lunny Xiao
e927a86586 Fix a bug user could change another user's primary email (#36586) (#36607)
backport #36586
2026-02-14 14:06:59 +02:00
Giteabot
76b7306daa Fix bug when do LFS GC (#36500) (#36608)
Backport #36500 by @lunny

Fix #36448

Removed unnecessary parameters from the LFS GC process and switched to
an ORDER BY id ASC strategy with a last-ID cursor to avoid missing or
duplicating meta object IDs.
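The cursor strategy can be sketched in miniature (a mock over a slice, standing in for the SQL query): instead of OFFSET paging, which skips or repeats rows when earlier rows are deleted mid-scan, each batch asks for IDs greater than the last ID seen.

```go
package main

import "fmt"

// nextBatch emulates "ORDER BY id ASC ... WHERE id > lastID LIMIT n":
// ids must already be sorted ascending, as the database would return them.
func nextBatch(ids []int64, lastID int64, limit int) []int64 {
	var out []int64
	for _, id := range ids {
		if id > lastID {
			out = append(out, id)
			if len(out) == limit {
				break
			}
		}
	}
	return out
}

func main() {
	ids := []int64{1, 2, 3, 5, 8, 13}
	lastID := int64(0)
	for {
		batch := nextBatch(ids, lastID, 2)
		if len(batch) == 0 {
			break
		}
		fmt.Println(batch) // [1 2], then [3 5], then [8 13]
		// Advance the cursor; deleting processed rows cannot shift
		// later rows out of (or into) the next batch.
		lastID = batch[len(batch)-1]
	}
}
```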

Signed-off-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-13 04:30:42 +00:00
Tyrone Yeh
8e412ababf Fix focus lost bugs in the Monaco editor (#36609)
…t focus (#36585)

Currently, pressing the space key in the Monaco editor scrolls the page
instead of inserting a space
if the editor is focused. This PR stops the space key event from
propagating to parent elements,
which prevents unwanted page scrolling while still allowing Monaco to
handle space input normally.

Changes:
 - disable Monaco editContext

No changes to default editor behavior are needed; Monaco automatically
inserts the space character.

Signed-off-by: silverwind <me@silverwind.io>
Co-authored-by: silverwind <me@silverwind.io>
2026-02-13 05:00:17 +01:00
Tyrone Yeh
4f1408cdcf fix(diff): reprocess htmx content after loading more files (#36568) (#36577) 2026-02-10 13:10:32 +08:00
Giteabot
5973437abb Add wrap to runner label list (#36565) (#36574)
Backport #36565 by @silverwind

Before: Label list forces runner table to become scrollable if there is
a large number of labels:

<img width="820" height="115" alt="Screenshot 2026-02-09 at 09 21 32"
src="https://github.com/user-attachments/assets/919a3b12-c8f6-48c4-bd42-d7e267faf107"
/>

After: Wrapped:

<img width="821" height="128" alt="Screenshot 2026-02-09 at 09 20 31"
src="https://github.com/user-attachments/assets/9f6d490c-1035-44be-97a7-20a1632dbe5e"
/>

Co-authored-by: silverwind <me@silverwind.io>
2026-02-10 04:47:12 +00:00
Giteabot
90843398ed fix: add dnf5 command for Fedora in RPM package instructions (#36527) (#36572)
Backport #36527 by @yshyuk

## Summary
Add support for Fedora 41+ which uses dnf5 with different command syntax
for adding repositories.

## Changes
- Added new locale key `packages.rpm.distros.fedora` for Fedora (dnf5)
- Added dnf5 command in RPM package template: `dnf config-manager
addrepo --from-repofile=<URL>`
- Kept existing dnf4 command (`--add-repo`) for RHEL/Rocky Linux
compatibility

## Background
Fedora 41+ uses dnf5 which has different syntax:
- **dnf4 (RHEL/Rocky):** `dnf config-manager --add-repo <URL>`
- **dnf5 (Fedora 41+):** `dnf config-manager addrepo
--from-repofile=<URL>`

Closes #35330

Co-authored-by: yshyuk <43194469+yshyuk@users.noreply.github.com>
2026-02-10 02:16:39 +01:00
Giteabot
9b3a9527ec Fix assignee sidebar links and empty placeholder (#36559) (#36563)
Backport #36559 by tyroneyeh

Co-authored-by: Tyrone Yeh <siryeh@gmail.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-02-09 03:31:38 +00:00
Giteabot
7477f85e47 Fix issues filter dropdown showing empty label scope section (#36535) (#36544)
Backport #36535 by tyroneyeh
2026-02-08 15:59:16 +00:00
wxiaoguang
4098032aa8 Fix various mermaid bugs (#36547) (#36552)
Backport #36547
2026-02-08 19:24:35 +08:00
Giteabot
dcce96c08d [SECURITY] fix: Adjust the toolchain version (#36537) (#36542)
Backport #36537 by @ZPascal

# Summary:

- Adjust the toolchain version to fix the security issues


```log
Vulnerability #1: GO-2026-4337
    Unexpected session resumption in crypto/tls
  More info: https://pkg.go.dev/vuln/GO-2026-4337
  Standard library
    Found in: crypto/tls@go1.25.6
    Fixed in: crypto/tls@go1.25.7
    Example traces found:
```

Signed-off-by: Pascal Zimmermann <pascal.zimmermann@theiotstudio.com>
Co-authored-by: Pascal Zimmermann <pascal.zimmermann@theiotstudio.com>
2026-02-06 23:00:52 +08:00
Giteabot
885f2b89d6 fix(packages/container): data race when uploading container blobs concurrently (#36524) (#36526)
Backport #36524 by @noeljackson

Fix data race when uploading container blobs concurrently

Co-authored-by: Noel Jackson <n@noeljackson.com>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2026-02-04 09:32:26 -08:00
Giteabot
57ce10c0ca Allow scroll propagation outside code editor (#36502) (#36510)
Backport #36502 by @lunny

Fix #28479

When scrolling inside the editor and the editor has already reached the
end of its scroll area, the browser does not continue scrolling. This is
inconvenient because users must move the cursor out of the editor to
scroll the page further.

This PR enables automatic switching between the editor’s scroll and the
browser’s scroll, allowing seamless continuous scrolling.

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-02-01 09:33:23 -08:00
Sebastian Ertz
25785041e7 Correct spacing between username and bot label (#36473) (#36484)
Backport #36473

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-01-30 05:47:46 +00:00
Giteabot
ff3d11034d [SECURITY] Toolchain Update to Go 1.25.6 (#36480) (#36487)
Backport #36480 by @ZPascal

## Overview
This PR updates the Go toolchain version from `1.25.5` to `1.25.6` for
the Gitea project.

## Changes

### Toolchain Update
- **Go Toolchain**: Updated from `go1.25.5` to `go1.25.6`

This is a minor toolchain version bump that ensures the project uses the
latest patch release of Go 1.25.

## Security Improvements

While this PR primarily addresses the toolchain update, the project
maintains a strong security posture through:

### Current Security Measures
```log
Vulnerability #1: GO-2026-4342                                                                                                                                                                                                      
    Excessive CPU consumption when building archive index in archive/zip
  More info: https://pkg.go.dev/vuln/GO-2026-4342
  Standard library
    Found in: archive/zip@go1.25.5
    Fixed in: archive/zip@go1.25.6
    Example traces found:
      #1: modules/packages/nuget/metadata.go:217:25: nuget.ParseNuspecMetaData calls zip.Reader.Open                                                                                                                                

Vulnerability #2: GO-2026-4341
    Memory exhaustion in query parameter parsing in net/url
  More info: https://pkg.go.dev/vuln/GO-2026-4341
  Standard library
    Found in: net/url@go1.25.5
    Fixed in: net/url@go1.25.6
    Example traces found:
      #1: modules/storage/minio.go:284:34: storage.MinioStorage.URL calls url.ParseQuery                                                                                                                                            
      #2: routers/api/v1/repo/action.go:1640:29: repo.DownloadArtifactRaw calls url.URL.Query

Vulnerability #3: GO-2026-4340
    Handshake messages may be processed at the incorrect encryption level in
    crypto/tls
  More info: https://pkg.go.dev/vuln/GO-2026-4340
  Standard library
    Found in: crypto/tls@go1.25.5
    Fixed in: crypto/tls@go1.25.6
    Example traces found:
      #1: services/auth/source/ldap/source_search.go:129:25: ldap.dial calls ldap.Conn.StartTLS, which calls tls.Conn.Handshake                                                                                                     
      #2: modules/graceful/server.go:156:14: graceful.Server.Serve calls http.Server.Serve, which eventually calls tls.Conn.HandshakeContext
      #3: modules/lfs/content_store.go:132:27: lfs.hashingReader.Read calls tls.Conn.Read
      #4: modules/proxyprotocol/conn.go:91:21: proxyprotocol.Conn.Write calls tls.Conn.Write
      #5: modules/session/virtual.go:168:39: session.VirtualStore.Release calls couchbase.CouchbaseProvider.Exist, which eventually calls tls.Dial
      #6: services/auth/source/ldap/source_search.go:120:22: ldap.dial calls ldap.DialTLS, which calls tls.DialWithDialer
      #7: services/migrations/gogs.go:114:34: migrations.client calls http.Transport.RoundTrip, which eventually calls tls.Dialer.DialContext
```

Co-authored-by: Pascal Zimmermann <pascal.zimmermann@theiotstudio.com>
2026-01-29 21:18:21 -08:00
Giteabot
750649c1ef Fix oauth2 s256 (#36462) (#36477)
Backport #36462 by @lunny

---------

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2026-01-28 12:37:39 -08:00
Lunny Xiao
eb95bbc1fd Add missing changelog for v1.25.4 (#36433) 2026-01-23 06:35:34 +01:00
90 changed files with 2490 additions and 760 deletions

View File

@@ -4,6 +4,53 @@ This changelog goes through the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.com).
## [1.25.5](https://github.com/go-gitea/gitea/releases/tag/1.25.5) - 2026-03-10
* SECURITY
* Toolchain Update to Go 1.25.6 (#36480) (#36487)
* Adjust the toolchain version (#36537) (#36542)
* Update toolchain to 1.25.8 for v1.25 (#36888)
* Prevent redirect bypasses via backslash-encoded paths (#36660) (#36716)
* Fix get release draft permission check (#36659) (#36715)
* Fix a bug user could change another user's primary email (#36586) (#36607)
* Fix OAuth2 authorization code expiry and reuse handling (#36797) (#36851)
* Add validation constraints for repository creation fields (#36671) (#36757)
* Fix bug to check whether user can update pull request branch or rebase branch (#36465) (#36838)
* Add migration http transport for push/sync mirror lfs (#36665) (#36691)
* Fix track time list permission check (#36662) (#36744)
* Fix track time issue id (#36664) (#36689)
* Fix path resolving (#36734) (#36746)
* Fix dump release asset bug (#36799) (#36839)
* Fix org permission API visibility checks for hidden members and private orgs (#36798) (#36841)
* Fix forwarded proto handling for public URL detection (#36810) (#36836)
* Add a git grep search timeout (#36809) (#36835)
* Fix oauth2 s256 (#36462) (#36477)
* ENHANCEMENTS
* Make `security-check` informational only (#36681) (#36852)
* Upgrade to github.com/cloudflare/circl 1.6.3, svgo 4.0.1, markdownlint-cli 0.48.0 (#36840)
* Add some validation on values provided to USER_DISABLED_FEATURES and EXTERNAL_USER_DISABLED_FEATURES (#36688) (#36692)
* Upgrade gogit to 5.16.5 (#36687)
* Add wrap to runner label list (#36565) (#36574)
* Add dnf5 command for Fedora in RPM package instructions (#36527) (#36572)
* Allow scroll propagation outside code editor (#36502) (#36510)
* BUGFIXES
* Fix non-admins unable to automerge PRs from forks (#36833) (#36843)
* Fix bug when pushing mirror with wiki (#36795) (#36807)
* Fix artifacts v4 backend upload problems (#36805) (#36834)
* Fix CRAN package version validation to allow more than 4 version components (#36813) (#36821)
* Fix force push time-line commit comments of pull request (#36653) (#36717)
* Fix SVG height calculation in diff viewer (#36748) (#36750)
* Fix push time bug (#36693) (#36713)
* Fix bug the protected branch rule name is conflicted with renamed branch name (#36650) (#36661)
* Fix bug when do LFS GC (#36500) (#36608)
* Fix focus lost bugs in the Monaco editor (#36609)
* Reprocess htmx content after loading more files (#36568) (#36577)
* Fix assignee sidebar links and empty placeholder (#36559) (#36563)
* Fix issues filter dropdown showing empty label scope section (#36535) (#36544)
* Fix various mermaid bugs (#36547) (#36552)
* Fix data race when uploading container blobs concurrently (#36524) (#36526)
* Correct spacing between username and bot label (#36473) (#36484)
## [1.25.4](https://github.com/go-gitea/gitea/releases/tag/1.25.4) - 2026-01-15
* SECURITY
@@ -20,6 +67,7 @@ been added to each release, please refer to the [blog](https://blog.gitea.com).
* Add more routes to the "expensive" list (#36290)
* Make "commit statuses" API accept slashes in "ref" (#36264) (#36275)
* BUGFIXES
* Fix git http service handling #36396
* Fix markdown newline handling during IME composition (#36421) #36424
* Fix missing repository id when migrating release attachments (#36389)
* Fix bug when compare in the pull request (#36363) (#36372)

View File

@@ -166,19 +166,19 @@ Here's how to run the test suite:
- code lint
| | |
| :-------------------- | :---------------------------------------------------------------- |
|``make lint`` | lint everything (not needed if you only change the front- **or** backend) |
|``make lint-frontend`` | lint frontend files |
|``make lint-backend`` | lint backend files |
| | |
| :-------------------- | :------------------------------------------------------------------------ |
|``make lint`` | lint everything (not needed if you only change the front- **or** backend) |
|``make lint-frontend`` | lint frontend files |
|``make lint-backend`` | lint backend files |
- run tests (we suggest running them on Linux)
| Command | Action | |
| :------------------------------------- | :----------------------------------------------- | ------------ |
|``make test[\#SpecificTestName]`` | run unit test(s) | |
|``make test-sqlite[\#SpecificTestName]``| run [integration](tests/integration) test(s) for SQLite |[More details](tests/integration/README.md) |
|``make test-e2e-sqlite[\#SpecificTestName]``| run [end-to-end](tests/e2e) test(s) for SQLite |[More details](tests/e2e/README.md) |
| Command | Action | |
| :----------------------------------------- | :------------------------------------------------------- | ------------------------------------------ |
|``make test[\#SpecificTestName]`` | run unit test(s) | |
|``make test-sqlite[\#SpecificTestName]``   | run [integration](tests/integration) test(s) for SQLite  |[More details](tests/integration/README.md) |
|``make test-e2e-sqlite[\#SpecificTestName]``| run [end-to-end](tests/e2e) test(s) for SQLite |[More details](tests/e2e/README.md) |
## Translation

View File

@@ -766,7 +766,7 @@ generate-go: $(TAGS_PREREQ)
.PHONY: security-check
security-check:
go run $(GOVULNCHECK_PACKAGE) -show color ./...
go run $(GOVULNCHECK_PACKAGE) -show color ./... || true
$(EXECUTABLE): $(GO_SOURCES) $(TAGS_PREREQ)
ifneq ($(and $(STATIC),$(findstring pam,$(TAGS))),)

View File

@@ -205,6 +205,7 @@ Gitea or set your environment appropriately.`, "")
PullRequestID: prID,
DeployKeyID: deployKeyID,
ActionPerm: int(actionPerm),
IsWiki: isWiki,
}
scanner := bufio.NewScanner(os.Stdin)
@@ -366,6 +367,7 @@ Gitea or set your environment appropriately.`, "")
GitPushOptions: pushOptions(),
PullRequestID: prID,
PushTrigger: repo_module.PushTrigger(os.Getenv(repo_module.EnvPushTrigger)),
IsWiki: isWiki,
}
oldCommitIDs := make([]string, hookBatchSize)
newCommitIDs := make([]string, hookBatchSize)
@@ -513,6 +515,7 @@ Gitea or set your environment appropriately.`, "")
reader := bufio.NewReader(os.Stdin)
repoUser := os.Getenv(repo_module.EnvRepoUsername)
isWiki, _ := strconv.ParseBool(os.Getenv(repo_module.EnvRepoIsWiki))
repoName := os.Getenv(repo_module.EnvRepoName)
pusherID, _ := strconv.ParseInt(os.Getenv(repo_module.EnvPusherID), 10, 64)
pusherName := os.Getenv(repo_module.EnvPusherName)
@@ -590,6 +593,7 @@ Gitea or set your environment appropriately.`, "")
UserName: pusherName,
UserID: pusherID,
GitPushOptions: make(map[string]string),
IsWiki: isWiki,
}
hookOptions.OldCommitIDs = make([]string, 0, hookBatchSize)
hookOptions.NewCommitIDs = make([]string, 0, hookBatchSize)

go.mod (6 changes)
View File

@@ -2,7 +2,7 @@ module code.gitea.io/gitea
go 1.25.0
toolchain go1.25.5
toolchain go1.25.8
// rfc5280 said: "The serial number is an integer assigned by the CA to each certificate."
// But some CAs use negative serial number, just relax the check. related:
@@ -58,7 +58,7 @@ require (
github.com/go-co-op/gocron v1.37.0
github.com/go-enry/go-enry/v2 v2.9.2
github.com/go-git/go-billy/v5 v5.6.2
github.com/go-git/go-git/v5 v5.16.3
github.com/go-git/go-git/v5 v5.16.5
github.com/go-ldap/ldap/v3 v3.4.11
github.com/go-redsync/redsync/v4 v4.13.0
github.com/go-sql-driver/mysql v1.9.3
@@ -181,7 +181,7 @@ require (
github.com/caddyserver/zerossl v0.1.3 // indirect
github.com/cention-sany/utf7 v0.0.0-20170124080048-26cad61bd60a // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/cloudflare/circl v1.6.3 // indirect
github.com/couchbase/go-couchbase v0.1.1 // indirect
github.com/couchbase/gomemcached v0.3.3 // indirect
github.com/couchbase/goutils v0.1.2 // indirect

go.sum (8 changes)
View File

@@ -231,8 +231,8 @@ github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObk
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
github.com/cloudflare/circl v1.6.3/go.mod h1:2eXP6Qfat4O/Yhh8BznvKnJ+uzEoTQ6jVKJRn81BiS4=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
@@ -339,8 +339,8 @@ github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UN
github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
github.com/go-git/go-git/v5 v5.16.3 h1:Z8BtvxZ09bYm/yYNgPKCzgWtaRqDTgIKRgIRHBfU6Z8=
github.com/go-git/go-git/v5 v5.16.3/go.mod h1:4Ge4alE/5gPs30F2H1esi2gPd69R0C39lolkucHBOp8=
github.com/go-git/go-git/v5 v5.16.5 h1:mdkuqblwr57kVfXri5TTH+nMFLNUxIj9Z7F5ykFbw5s=
github.com/go-git/go-git/v5 v5.16.5/go.mod h1:QOMLpNf1qxuSY4StA/ArOdfFR2TrKEjJiye2kel2m+M=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=

View File

@@ -14,6 +14,7 @@ import (
"net/url"
"slices"
"strings"
"time"
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/modules/container"
@@ -27,6 +28,11 @@ import (
"xorm.io/xorm"
)
// Authorization codes should expire within 10 minutes per https://datatracker.ietf.org/doc/html/rfc6749#section-4.1.2
const oauth2AuthorizationCodeValidity = 10 * time.Minute
var ErrOAuth2AuthorizationCodeInvalidated = errors.New("oauth2 authorization code already invalidated")
// OAuth2Application represents an OAuth2 client (RFC 6749)
type OAuth2Application struct {
ID int64 `xorm:"pk autoincr"`
@@ -386,6 +392,14 @@ func (code *OAuth2AuthorizationCode) TableName() string {
return "oauth2_authorization_code"
}
// IsExpired reports whether the authorization code is expired.
func (code *OAuth2AuthorizationCode) IsExpired() bool {
if code.ValidUntil.IsZero() {
return true
}
return code.ValidUntil <= timeutil.TimeStampNow()
}
// GenerateRedirectURI generates a redirect URI for a successful authorization request. State will be used if not empty.
func (code *OAuth2AuthorizationCode) GenerateRedirectURI(state string) (*url.URL, error) {
redirect, err := url.Parse(code.RedirectURI)
@@ -403,8 +417,14 @@ func (code *OAuth2AuthorizationCode) GenerateRedirectURI(state string) (*url.URL
// Invalidate deletes the auth code from the database to invalidate this code
func (code *OAuth2AuthorizationCode) Invalidate(ctx context.Context) error {
_, err := db.GetEngine(ctx).ID(code.ID).NoAutoCondition().Delete(code)
return err
affected, err := db.GetEngine(ctx).ID(code.ID).NoAutoCondition().Delete(code)
if err != nil {
return err
}
if affected == 0 {
return ErrOAuth2AuthorizationCodeInvalidated
}
return nil
}
// ValidateCodeChallenge validates the given verifier against the saved code challenge. This is part of the PKCE implementation.
@@ -472,6 +492,7 @@ func (grant *OAuth2Grant) GenerateNewAuthorizationCode(ctx context.Context, redi
// for code scanners to grab sensitive tokens.
codeSecret := "gta_" + base32Lower.EncodeToString(rBytes)
validUntil := time.Now().Add(oauth2AuthorizationCodeValidity)
code = &OAuth2AuthorizationCode{
Grant: grant,
GrantID: grant.ID,
@@ -479,6 +500,7 @@ func (grant *OAuth2Grant) GenerateNewAuthorizationCode(ctx context.Context, redi
Code: codeSecret,
CodeChallenge: codeChallenge,
CodeChallengeMethod: codeChallengeMethod,
ValidUntil: timeutil.TimeStamp(validUntil.Unix()),
}
if err := db.Insert(ctx, code); err != nil {
return nil, err

View File

@@ -5,13 +5,45 @@ package auth_test
import (
"testing"
"time"
auth_model "code.gitea.io/gitea/models/auth"
"code.gitea.io/gitea/models/unittest"
"code.gitea.io/gitea/modules/timeutil"
"github.com/stretchr/testify/assert"
)
func TestOAuth2AuthorizationCodeValidity(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
t.Run("GenerateSetsValidUntil", func(t *testing.T) {
grant := unittest.AssertExistsAndLoadBean(t, &auth_model.OAuth2Grant{ID: 1})
expectedValidUntil := timeutil.TimeStamp(time.Now().Unix() + 600)
code, err := grant.GenerateNewAuthorizationCode(t.Context(), "http://127.0.0.1/", "", "")
assert.NoError(t, err)
assert.Equal(t, expectedValidUntil, code.ValidUntil)
assert.False(t, code.IsExpired())
assert.NoError(t, code.Invalidate(t.Context()))
})
t.Run("Expired", func(t *testing.T) {
defer timeutil.MockSet(time.Unix(2, 0).UTC())()
code := &auth_model.OAuth2AuthorizationCode{ValidUntil: timeutil.TimeStamp(1)}
assert.True(t, code.IsExpired())
})
t.Run("InvalidateTwice", func(t *testing.T) {
code, err := auth_model.GetOAuth2AuthorizationByCode(t.Context(), "authcode")
assert.NoError(t, err)
if assert.NotNil(t, code) {
assert.NoError(t, code.Invalidate(t.Context()))
assert.ErrorIs(t, code.Invalidate(t.Context()), auth_model.ErrOAuth2AuthorizationCodeInvalidated)
}
})
}
func TestOAuth2Application_GenerateClientSecret(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
app := unittest.AssertExistsAndLoadBean(t, &auth_model.OAuth2Application{ID: 1})

View File

@@ -153,3 +153,16 @@
download_count: 0
size: 0
created_unix: 946684800
-
id: 13
uuid: a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a23
repo_id: 1
issue_id: 0
release_id: 4
uploader_id: 2
comment_id: 0
name: draft-attach
download_count: 0
size: 0
created_unix: 946684800

View File

@@ -397,10 +397,16 @@ func RenameBranch(ctx context.Context, repo *repo_model.Repository, from, to str
if protectedBranch != nil {
// there is a protect rule for this branch
protectedBranch.RuleName = to
if _, err = sess.ID(protectedBranch.ID).Cols("branch_name").Update(protectedBranch); err != nil {
existingRule, err := GetProtectedBranchRuleByName(ctx, repo.ID, to)
if err != nil {
return err
}
if existingRule == nil || existingRule.ID == protectedBranch.ID {
protectedBranch.RuleName = to
if _, err = sess.ID(protectedBranch.ID).Cols("branch_name").Update(protectedBranch); err != nil {
return err
}
}
} else {
// some glob protect rules may match this branch
protected, err := IsBranchProtected(ctx, repo.ID, from)
@@ -444,7 +450,7 @@ func RenameBranch(ctx context.Context, repo *repo_model.Repository, from, to str
type FindRecentlyPushedNewBranchesOptions struct {
Repo *repo_model.Repository
BaseRepo *repo_model.Repository
CommitAfterUnix int64
PushedAfterUnix int64
MaxCount int
}
@@ -454,11 +460,11 @@ type RecentlyPushedNewBranch struct {
BranchDisplayName string
BranchLink string
BranchCompareURL string
CommitTime timeutil.TimeStamp
PushedTime timeutil.TimeStamp
}
// FindRecentlyPushedNewBranches returns at most 2 new branches pushed by the user in 2 hours which have no open PRs created
// if opts.CommitAfterUnix is 0, we will find the branches that were committed to in the last 2 hours
// if opts.PushedAfterUnix is 0, we will find the branches that were pushed in the last 2 hours
// if opts.ListOptions is not set, we will only display top 2 latest branches.
// Protected branches will be skipped since they are unlikely to be used to create new PRs.
func FindRecentlyPushedNewBranches(ctx context.Context, doer *user_model.User, opts FindRecentlyPushedNewBranchesOptions) ([]*RecentlyPushedNewBranch, error) {
@@ -486,8 +492,8 @@ func FindRecentlyPushedNewBranches(ctx context.Context, doer *user_model.User, o
}
repoIDs := builder.Select("id").From("repository").Where(repoCond)
if opts.CommitAfterUnix == 0 {
opts.CommitAfterUnix = time.Now().Add(-time.Hour * 2).Unix()
if opts.PushedAfterUnix == 0 {
opts.PushedAfterUnix = time.Now().Add(-time.Hour * 2).Unix()
}
baseBranch, err := GetBranch(ctx, opts.BaseRepo.ID, opts.BaseRepo.DefaultBranch)
@@ -503,7 +509,7 @@ func FindRecentlyPushedNewBranches(ctx context.Context, doer *user_model.User, o
"pusher_id": doer.ID,
"is_deleted": false,
},
builder.Gte{"commit_time": opts.CommitAfterUnix},
builder.Gte{"updated_unix": opts.PushedAfterUnix},
builder.In("repo_id", repoIDs),
// newly created branches have no changes, so skip them
builder.Neq{"commit_id": baseBranch.CommitID},
@@ -556,7 +562,7 @@ func FindRecentlyPushedNewBranches(ctx context.Context, doer *user_model.User, o
BranchName: branch.Name,
BranchLink: fmt.Sprintf("%s/src/branch/%s", branch.Repo.Link(), util.PathEscapeSegments(branch.Name)),
BranchCompareURL: branch.Repo.ComposeBranchCompareURL(opts.BaseRepo, branch.Name),
CommitTime: branch.CommitTime,
PushedTime: branch.UpdatedUnix,
})
}
if len(newBranches) == opts.MaxCount {

View File

@@ -6,14 +6,17 @@ package git_test
import (
"context"
"testing"
"time"
"code.gitea.io/gitea/models/db"
git_model "code.gitea.io/gitea/models/git"
issues_model "code.gitea.io/gitea/models/issues"
repo_model "code.gitea.io/gitea/models/repo"
"code.gitea.io/gitea/models/unittest"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/optional"
"code.gitea.io/gitea/modules/timeutil"
"github.com/stretchr/testify/assert"
)
@@ -63,6 +66,36 @@ func TestGetDeletedBranch(t *testing.T) {
assert.NotNil(t, getDeletedBranch(t, firstBranch))
}
func TestFindRecentlyPushedNewBranchesUsesPushTime(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 10})
doer := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 12})
branch := unittest.AssertExistsAndLoadBean(t, &git_model.Branch{RepoID: repo.ID, Name: "outdated-new-branch"})
commitUnix := time.Now().Add(-3 * time.Hour).Unix()
pushUnix := time.Now().Add(-30 * time.Minute).Unix()
_, err := db.GetEngine(t.Context()).Exec(
"UPDATE branch SET commit_time = ?, updated_unix = ? WHERE id = ?",
commitUnix,
pushUnix,
branch.ID,
)
assert.NoError(t, err)
branches, err := git_model.FindRecentlyPushedNewBranches(t.Context(), doer, git_model.FindRecentlyPushedNewBranchesOptions{
Repo: repo,
BaseRepo: repo,
PushedAfterUnix: time.Now().Add(-time.Hour).Unix(),
MaxCount: 1,
})
assert.NoError(t, err)
if assert.Len(t, branches, 1) {
assert.Equal(t, branch.Name, branches[0].BranchName)
assert.Equal(t, timeutil.TimeStamp(pushUnix), branches[0].PushedTime)
}
}
func TestDeletedBranchLoadUser(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
@@ -159,6 +192,53 @@ func TestRenameBranch(t *testing.T) {
})
}
func TestRenameBranchProtectedRuleConflict(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
repo1 := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
master := unittest.AssertExistsAndLoadBean(t, &git_model.Branch{RepoID: repo1.ID, Name: "master"})
devBranch := &git_model.Branch{
RepoID: repo1.ID,
Name: "dev",
CommitID: master.CommitID,
CommitMessage: master.CommitMessage,
CommitTime: master.CommitTime,
PusherID: master.PusherID,
}
assert.NoError(t, db.Insert(t.Context(), devBranch))
pbDev := git_model.ProtectedBranch{
RepoID: repo1.ID,
RuleName: "dev",
CanPush: true,
}
assert.NoError(t, git_model.UpdateProtectBranch(t.Context(), repo1, &pbDev, git_model.WhitelistOptions{}))
pbMain := git_model.ProtectedBranch{
RepoID: repo1.ID,
RuleName: "main",
CanPush: true,
}
assert.NoError(t, git_model.UpdateProtectBranch(t.Context(), repo1, &pbMain, git_model.WhitelistOptions{}))
assert.NoError(t, git_model.RenameBranch(t.Context(), repo1, "dev", "main", func(ctx context.Context, isDefault bool) error {
return nil
}))
unittest.AssertNotExistsBean(t, &git_model.Branch{RepoID: repo1.ID, Name: "dev"})
unittest.AssertExistsAndLoadBean(t, &git_model.Branch{RepoID: repo1.ID, Name: "main"})
protectedDev, err := git_model.GetProtectedBranchRuleByName(t.Context(), repo1.ID, "dev")
assert.NoError(t, err)
assert.NotNil(t, protectedDev)
assert.Equal(t, "dev", protectedDev.RuleName)
protectedMainByID, err := git_model.GetProtectedBranchRuleByID(t.Context(), repo1.ID, pbMain.ID)
assert.NoError(t, err)
assert.NotNil(t, protectedMainByID)
assert.Equal(t, "main", protectedMainByID.RuleName)
}
func TestOnlyGetDeletedBranchOnCorrectRepo(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())

View File

@@ -343,15 +343,12 @@ func IterateRepositoryIDsWithLFSMetaObjects(ctx context.Context, f func(ctx cont
// IterateLFSMetaObjectsForRepoOptions provides options for IterateLFSMetaObjectsForRepo
type IterateLFSMetaObjectsForRepoOptions struct {
OlderThan timeutil.TimeStamp
UpdatedLessRecentlyThan timeutil.TimeStamp
OrderByUpdated bool
LoopFunctionAlwaysUpdates bool
OlderThan timeutil.TimeStamp
UpdatedLessRecentlyThan timeutil.TimeStamp
}
// IterateLFSMetaObjectsForRepo provides an iterator for LFSMetaObjects per Repo
func IterateLFSMetaObjectsForRepo(ctx context.Context, repoID int64, f func(context.Context, *LFSMetaObject, int64) error, opts *IterateLFSMetaObjectsForRepoOptions) error {
var start int
batchSize := setting.Database.IterateBufferSize
engine := db.GetEngine(ctx)
type CountLFSMetaObject struct {
@@ -359,7 +356,7 @@ func IterateLFSMetaObjectsForRepo(ctx context.Context, repoID int64, f func(cont
LFSMetaObject `xorm:"extends"`
}
id := int64(0)
lastID := int64(0)
for {
beans := make([]*CountLFSMetaObject, 0, batchSize)
@@ -372,29 +369,23 @@ func IterateLFSMetaObjectsForRepo(ctx context.Context, repoID int64, f func(cont
if !opts.UpdatedLessRecentlyThan.IsZero() {
sess.And("`lfs_meta_object`.updated_unix < ?", opts.UpdatedLessRecentlyThan)
}
sess.GroupBy("`lfs_meta_object`.id")
if opts.OrderByUpdated {
sess.OrderBy("`lfs_meta_object`.updated_unix ASC")
} else {
sess.And("`lfs_meta_object`.id > ?", id)
sess.OrderBy("`lfs_meta_object`.id ASC")
}
if err := sess.Limit(batchSize, start).Find(&beans); err != nil {
sess.GroupBy("`lfs_meta_object`.id").
And("`lfs_meta_object`.id > ?", lastID).
OrderBy("`lfs_meta_object`.id ASC")
if err := sess.Limit(batchSize).Find(&beans); err != nil {
return err
}
if len(beans) == 0 {
return nil
}
if !opts.LoopFunctionAlwaysUpdates {
start += len(beans)
}
for _, bean := range beans {
if err := f(ctx, &bean.LFSMetaObject, bean.Count); err != nil {
return err
}
}
id = beans[len(beans)-1].ID
lastID = beans[len(beans)-1].ID
}
}

models/git/lfs_test.go (new file, 61 lines)
View File

@@ -0,0 +1,61 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package git_test
import (
"bytes"
"context"
"strconv"
"testing"
"time"
"code.gitea.io/gitea/models/db"
git_model "code.gitea.io/gitea/models/git"
repo_model "code.gitea.io/gitea/models/repo"
"code.gitea.io/gitea/models/unittest"
"code.gitea.io/gitea/modules/lfs"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/test"
"code.gitea.io/gitea/modules/timeutil"
"github.com/stretchr/testify/assert"
)
func TestIterateLFSMetaObjectsForRepoUpdatesDoNotSkip(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
ctx := t.Context()
repo, err := repo_model.GetRepositoryByOwnerAndName(ctx, "user2", "repo1")
assert.NoError(t, err)
defer test.MockVariableValue(&setting.Database.IterateBufferSize, 1)()
created := make([]*git_model.LFSMetaObject, 0, 3)
for i := range 3 {
content := []byte("gitea-lfs-" + strconv.Itoa(i))
pointer, err := lfs.GeneratePointer(bytes.NewReader(content))
assert.NoError(t, err)
meta, err := git_model.NewLFSMetaObject(ctx, repo.ID, pointer)
assert.NoError(t, err)
created = append(created, meta)
}
iterated := make([]int64, 0, len(created))
cutoff := time.Now().Add(24 * time.Hour)
iterErr := git_model.IterateLFSMetaObjectsForRepo(ctx, repo.ID, func(ctx context.Context, meta *git_model.LFSMetaObject, count int64) error {
iterated = append(iterated, meta.ID)
_, err := db.GetEngine(ctx).ID(meta.ID).Cols("updated_unix").Update(&git_model.LFSMetaObject{
UpdatedUnix: timeutil.TimeStamp(time.Now().Unix()),
})
return err
}, &git_model.IterateLFSMetaObjectsForRepoOptions{
OlderThan: timeutil.TimeStamp(cutoff.Unix()),
UpdatedLessRecentlyThan: timeutil.TimeStamp(cutoff.Unix()),
})
assert.NoError(t, iterErr)
expected := []int64{created[0].ID, created[1].ID, created[2].ID}
assert.Equal(t, expected, iterated)
}

View File

@@ -692,7 +692,7 @@ func (c *Comment) LoadTime(ctx context.Context) error {
return nil
}
var err error
c.Time, err = GetTrackedTimeByID(ctx, c.TimeID)
c.Time, err = GetTrackedTimeByID(ctx, c.IssueID, c.TimeID)
return err
}

View File

@@ -311,13 +311,13 @@ func deleteTime(ctx context.Context, t *TrackedTime) error {
}
// GetTrackedTimeByID returns a raw TrackedTime by id without loading attributes
func GetTrackedTimeByID(ctx context.Context, id int64) (*TrackedTime, error) {
func GetTrackedTimeByID(ctx context.Context, issueID, trackedTimeID int64) (*TrackedTime, error) {
time := new(TrackedTime)
has, err := db.GetEngine(ctx).ID(id).Get(time)
has, err := db.GetEngine(ctx).ID(trackedTimeID).Where("issue_id = ?", issueID).Get(time)
if err != nil {
return nil, err
} else if !has {
return nil, db.ErrNotExist{Resource: "tracked_time", ID: id}
return nil, db.ErrNotExist{Resource: "tracked_time", ID: trackedTimeID}
}
return time, nil
}

View File

@@ -43,13 +43,15 @@ func GetOrInsertBlob(ctx context.Context, pb *PackageBlob) (*PackageBlob, bool,
existing := &PackageBlob{}
has, err := e.Where(builder.Eq{
hashCond := builder.Eq{
"size": pb.Size,
"hash_md5": pb.HashMD5,
"hash_sha1": pb.HashSHA1,
"hash_sha256": pb.HashSHA256,
"hash_sha512": pb.HashSHA512,
}).Get(existing)
}
has, err := e.Where(hashCond).Get(existing)
if err != nil {
return nil, false, err
}
@@ -57,6 +59,11 @@ func GetOrInsertBlob(ctx context.Context, pb *PackageBlob) (*PackageBlob, bool,
return existing, true, nil
}
if _, err = e.Insert(pb); err != nil {
// Handle race condition: another request may have inserted the same blob
// between our SELECT and INSERT. Retry the SELECT to get the existing blob.
if has, _ = e.Where(hashCond).Get(existing); has {
return existing, true, nil
}
return nil, false, err
}
return pb, false, nil

View File

@@ -0,0 +1,51 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package packages
import (
"testing"
"code.gitea.io/gitea/models/unittest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
)
func TestGetOrInsertBlobConcurrent(t *testing.T) {
require.NoError(t, unittest.PrepareTestDatabase())
testBlob := PackageBlob{
Size: 123,
HashMD5: "md5",
HashSHA1: "sha1",
HashSHA256: "sha256",
HashSHA512: "sha512",
}
const numGoroutines = 3
var wg errgroup.Group
results := make([]*PackageBlob, numGoroutines)
existed := make([]bool, numGoroutines)
for idx := range numGoroutines {
wg.Go(func() error {
blob := testBlob // Create a copy of the test blob for each goroutine
var err error
results[idx], existed[idx], err = GetOrInsertBlob(t.Context(), &blob)
return err
})
}
require.NoError(t, wg.Wait())
// then: all GetOrInsertBlob calls succeed with the same blob ID, and only one indicates it did not exist before
existedCount := 0
assert.NotNil(t, results[0])
for i := range numGoroutines {
assert.Equal(t, results[0].ID, results[i].ID)
if existed[i] {
existedCount++
}
}
assert.Equal(t, numGoroutines-1, existedCount)
}

View File

@@ -276,17 +276,22 @@ func updateActivation(ctx context.Context, email *EmailAddress, activate bool) e
return UpdateUserCols(ctx, user, "rands")
}
func MakeActiveEmailPrimary(ctx context.Context, emailID int64) error {
return makeEmailPrimaryInternal(ctx, emailID, true)
func MakeActiveEmailPrimary(ctx context.Context, ownerID, emailID int64) error {
return makeEmailPrimaryInternal(ctx, ownerID, emailID, true)
}
func MakeInactiveEmailPrimary(ctx context.Context, emailID int64) error {
return makeEmailPrimaryInternal(ctx, emailID, false)
func MakeInactiveEmailPrimary(ctx context.Context, ownerID, emailID int64) error {
return makeEmailPrimaryInternal(ctx, ownerID, emailID, false)
}
func makeEmailPrimaryInternal(ctx context.Context, emailID int64, isActive bool) error {
func makeEmailPrimaryInternal(ctx context.Context, ownerID, emailID int64, isActive bool) error {
email := &EmailAddress{}
if has, err := db.GetEngine(ctx).ID(emailID).Where(builder.Eq{"is_activated": isActive}).Get(email); err != nil {
if has, err := db.GetEngine(ctx).ID(emailID).
Where(builder.Eq{
"uid": ownerID,
"is_activated": isActive,
}).
Get(email); err != nil {
return err
} else if !has {
return ErrEmailAddressNotExist{}
@@ -336,7 +341,7 @@ func ChangeInactivePrimaryEmail(ctx context.Context, uid int64, oldEmailAddr, ne
if err != nil {
return err
}
return MakeInactiveEmailPrimary(ctx, newEmail.ID)
return MakeInactiveEmailPrimary(ctx, uid, newEmail.ID)
})
}

View File

@@ -46,22 +46,22 @@ func TestIsEmailUsed(t *testing.T) {
func TestMakeEmailPrimary(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
err := user_model.MakeActiveEmailPrimary(t.Context(), 9999999)
err := user_model.MakeActiveEmailPrimary(t.Context(), 1, 9999999)
assert.Error(t, err)
assert.ErrorIs(t, err, user_model.ErrEmailAddressNotExist{})
email := unittest.AssertExistsAndLoadBean(t, &user_model.EmailAddress{Email: "user11@example.com"})
err = user_model.MakeActiveEmailPrimary(t.Context(), email.ID)
err = user_model.MakeActiveEmailPrimary(t.Context(), email.UID, email.ID)
assert.Error(t, err)
assert.ErrorIs(t, err, user_model.ErrEmailAddressNotExist{}) // inactive email is considered as not exist for "MakeActiveEmailPrimary"
email = unittest.AssertExistsAndLoadBean(t, &user_model.EmailAddress{Email: "user9999999@example.com"})
err = user_model.MakeActiveEmailPrimary(t.Context(), email.ID)
err = user_model.MakeActiveEmailPrimary(t.Context(), email.UID, email.ID)
assert.Error(t, err)
assert.True(t, user_model.IsErrUserNotExist(err))
email = unittest.AssertExistsAndLoadBean(t, &user_model.EmailAddress{Email: "user101@example.com"})
err = user_model.MakeActiveEmailPrimary(t.Context(), email.ID)
err = user_model.MakeActiveEmailPrimary(t.Context(), email.UID, email.ID)
assert.NoError(t, err)
user, _ := user_model.GetUserByID(t.Context(), int64(10))

View File

@@ -13,6 +13,7 @@ import (
"slices"
"strconv"
"strings"
"time"
"code.gitea.io/gitea/modules/git/gitcmd"
"code.gitea.io/gitea/modules/util"
@@ -41,6 +42,10 @@ type GrepOptions struct {
PathspecList []string
}
// grepSearchTimeout is the timeout for git grep search, it should be long enough to get results
// but not too long to cause performance issues
const grepSearchTimeout = 30 * time.Second
func GrepSearch(ctx context.Context, repo *Repository, search string, opts GrepOptions) ([]*GrepResult, error) {
stdoutReader, stdoutWriter, err := os.Pipe()
if err != nil {
@@ -85,9 +90,10 @@ func GrepSearch(ctx context.Context, repo *Repository, search string, opts GrepO
opts.MaxResultLimit = util.IfZero(opts.MaxResultLimit, 50)
stderr := bytes.Buffer{}
err = cmd.Run(ctx, &gitcmd.RunOpts{
Dir: repo.Path,
Stdout: stdoutWriter,
Stderr: &stderr,
Dir: repo.Path,
Stdout: stdoutWriter,
Stderr: &stderr,
Timeout: grepSearchTimeout,
PipelineFunc: func(ctx context.Context, cancel context.CancelFunc) error {
_ = stdoutWriter.Close()
defer stdoutReader.Close()

View File

@@ -24,7 +24,18 @@ func urlIsRelative(s string, u *url.URL) bool {
if len(s) > 1 && (s[0] == '/' || s[0] == '\\') && (s[1] == '/' || s[1] == '\\') {
return false
}
return u != nil && u.Scheme == "" && u.Host == ""
if u == nil {
return false // invalid URL
}
if u.Scheme != "" || u.Host != "" {
return false // absolute URL with scheme or host
}
// Now, the URL is likely a relative URL
// HINT: GOLANG-HTTP-REDIRECT-BUG: Golang security vulnerability: "http.Redirect" calls "path.Clean" and changes the meaning of a path
// For example, `/a/../\b` will be changed to `/\b`, then it hits the first checked pattern and becomes an open redirect to "{current-scheme}://b"
// For a valid relative URL, its "path" shouldn't contain `\` because such char must be escaped.
// So if the "path" contains `\`, it is not a valid relative URL, then we can prevent open redirect.
return !strings.Contains(u.Path, "\\")
}
// IsRelativeURL detects if a URL is relative (no scheme or host)
@@ -35,14 +46,14 @@ func IsRelativeURL(s string) bool {
func getRequestScheme(req *http.Request) string {
// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-Proto
if s := req.Header.Get("X-Forwarded-Proto"); s != "" {
return s
if proto, ok := parseForwardedProtoValue(req.Header.Get("X-Forwarded-Proto")); ok {
return proto
}
if s := req.Header.Get("X-Forwarded-Protocol"); s != "" {
return s
if proto, ok := parseForwardedProtoValue(req.Header.Get("X-Forwarded-Protocol")); ok {
return proto
}
if s := req.Header.Get("X-Url-Scheme"); s != "" {
return s
if proto, ok := parseForwardedProtoValue(req.Header.Get("X-Url-Scheme")); ok {
return proto
}
if s := req.Header.Get("Front-End-Https"); s != "" {
return util.Iif(s == "on", "https", "http")
@@ -53,6 +64,13 @@ func getRequestScheme(req *http.Request) string {
return ""
}
func parseForwardedProtoValue(val string) (string, bool) {
if val == "http" || val == "https" {
return val, true
}
return "", false
}
// GuessCurrentAppURL tries to guess the current full public URL (with sub-path) by http headers. It always has a '/' suffix, exactly the same as setting.AppURL
// TODO: should rename it to GuessCurrentPublicURL in the future
func GuessCurrentAppURL(ctx context.Context) string {

View File

@@ -23,6 +23,7 @@ func TestIsRelativeURL(t *testing.T) {
"foo",
"/",
"/foo?k=%20#abc",
"/foo?k=\\",
}
for _, s := range rel {
assert.True(t, IsRelativeURL(s), "rel = %q", s)
@@ -32,6 +33,8 @@ func TestIsRelativeURL(t *testing.T) {
"\\\\",
"/\\",
"\\/",
"/a/../\\b",
"/any\\thing",
"mailto:a@b.com",
"https://test.com",
}
@@ -44,6 +47,7 @@ func TestGuessCurrentHostURL(t *testing.T) {
defer test.MockVariableValue(&setting.AppURL, "http://cfg-host/sub/")()
defer test.MockVariableValue(&setting.AppSubURL, "/sub")()
headersWithProto := http.Header{"X-Forwarded-Proto": {"https"}}
maliciousProtoHeaders := http.Header{"X-Forwarded-Proto": {"http://attacker.host/?trash="}}
t.Run("Legacy", func(t *testing.T) {
defer test.MockVariableValue(&setting.PublicURLDetection, setting.PublicURLLegacy)()
@@ -57,6 +61,9 @@ func TestGuessCurrentHostURL(t *testing.T) {
// if "X-Forwarded-Proto" exists, then use it and "Host" header
ctx = context.WithValue(t.Context(), RequestContextKey, &http.Request{Host: "req-host:3000", Header: headersWithProto})
assert.Equal(t, "https://req-host:3000", GuessCurrentHostURL(ctx))
ctx = context.WithValue(t.Context(), RequestContextKey, &http.Request{Host: "req-host:3000", Header: maliciousProtoHeaders})
assert.Equal(t, "http://cfg-host", GuessCurrentHostURL(ctx))
})
t.Run("Auto", func(t *testing.T) {
@@ -73,6 +80,9 @@ func TestGuessCurrentHostURL(t *testing.T) {
ctx = context.WithValue(t.Context(), RequestContextKey, &http.Request{Host: "req-host:3000", Header: headersWithProto})
assert.Equal(t, "https://req-host:3000", GuessCurrentHostURL(ctx))
ctx = context.WithValue(t.Context(), RequestContextKey, &http.Request{Host: "req-host:3000", Header: maliciousProtoHeaders})
assert.Equal(t, "http://req-host:3000", GuessCurrentHostURL(ctx))
})
}

View File

@@ -34,7 +34,7 @@ var (
var (
fieldPattern = regexp.MustCompile(`\A\S+:`)
namePattern = regexp.MustCompile(`\A[a-zA-Z][a-zA-Z0-9\.]*[a-zA-Z0-9]\z`)
versionPattern = regexp.MustCompile(`\A[0-9]+(?:[.\-][0-9]+){1,3}\z`)
versionPattern = regexp.MustCompile(`\A[0-9]+(?:[.\-][0-9]+)+\z`)
authorReplacePattern = regexp.MustCompile(`[\[\(].+?[\]\)]`)
)

View File

@@ -128,13 +128,22 @@ func TestParseDescription(t *testing.T) {
})
t.Run("InvalidVersion", func(t *testing.T) {
for _, version := range []string{"1", "1 0", "1.2.3.4.5", "1-2-3-4-5", "1.", "1.0.", "1-", "1-0-"} {
for _, version := range []string{"1", "1 0", "1.", "1.0.", "1-", "1-0-"} {
p, err := ParseDescription(createDescription(packageName, version))
assert.Nil(t, p)
assert.ErrorIs(t, err, ErrInvalidVersion)
}
})
t.Run("ValidVersionManyComponents", func(t *testing.T) {
for _, version := range []string{"0.3.4.0.2", "1.2.3.4.5", "1-2-3-4-5"} {
p, err := ParseDescription(createDescription(packageName, version))
assert.NoError(t, err)
assert.NotNil(t, p)
assert.Equal(t, version, p.Version)
}
})
t.Run("Valid", func(t *testing.T) {
p, err := ParseDescription(createDescription(packageName, packageVersion))
assert.NoError(t, err)

View File

@@ -5,6 +5,7 @@ package setting
import (
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/log"
)
// Admin settings
@@ -15,12 +16,33 @@ var Admin struct {
ExternalUserDisableFeatures container.Set[string]
}
var validUserFeatures = container.SetOf(
UserFeatureDeletion,
UserFeatureManageSSHKeys,
UserFeatureManageGPGKeys,
UserFeatureManageMFA,
UserFeatureManageCredentials,
UserFeatureChangeUsername,
UserFeatureChangeFullName,
)
func loadAdminFrom(rootCfg ConfigProvider) {
sec := rootCfg.Section("admin")
Admin.DisableRegularOrgCreation = sec.Key("DISABLE_REGULAR_ORG_CREATION").MustBool(false)
Admin.DefaultEmailNotification = sec.Key("DEFAULT_EMAIL_NOTIFICATIONS").MustString("enabled")
Admin.UserDisabledFeatures = container.SetOf(sec.Key("USER_DISABLED_FEATURES").Strings(",")...)
Admin.ExternalUserDisableFeatures = container.SetOf(sec.Key("EXTERNAL_USER_DISABLE_FEATURES").Strings(",")...).Union(Admin.UserDisabledFeatures)
for feature := range Admin.UserDisabledFeatures {
if !validUserFeatures.Contains(feature) {
log.Warn("USER_DISABLED_FEATURES contains unknown feature %q", feature)
}
}
for feature := range Admin.ExternalUserDisableFeatures {
if !validUserFeatures.Contains(feature) && !Admin.UserDisabledFeatures.Contains(feature) {
log.Warn("EXTERNAL_USER_DISABLE_FEATURES contains unknown feature %q", feature)
}
}
}
const (

View File

@@ -5,6 +5,7 @@ package storage
import (
"context"
"errors"
"fmt"
"io"
"net/url"
@@ -27,25 +28,32 @@ type LocalStorage struct {
// NewLocalStorage returns a local file storage
func NewLocalStorage(ctx context.Context, config *setting.Storage) (ObjectStorage, error) {
// prepare storage root path
if !filepath.IsAbs(config.Path) {
return nil, fmt.Errorf("LocalStorageConfig.Path should have been prepared by setting/storage.go and should be an absolute path, but not: %q", config.Path)
}
log.Info("Creating new Local Storage at %s", config.Path)
if err := os.MkdirAll(config.Path, os.ModePerm); err != nil {
return nil, err
return nil, fmt.Errorf("LocalStorage config.Path should have been prepared by setting/storage.go and should be an absolute path, but not: %q", config.Path)
}
storageRoot := util.FilePathJoinAbs(config.Path)
if config.TemporaryPath == "" {
config.TemporaryPath = filepath.Join(config.Path, "tmp")
// prepare storage temporary path
storageTmp := config.TemporaryPath
if storageTmp == "" {
storageTmp = filepath.Join(storageRoot, "tmp")
}
if !filepath.IsAbs(config.TemporaryPath) {
return nil, fmt.Errorf("LocalStorageConfig.TemporaryPath should be an absolute path, but not: %q", config.TemporaryPath)
if !filepath.IsAbs(storageTmp) {
return nil, fmt.Errorf("LocalStorage config.TemporaryPath should be an absolute path, but not: %q", config.TemporaryPath)
}
storageTmp = util.FilePathJoinAbs(storageTmp)
// create the storage root if not exist
log.Info("Creating new Local Storage at %s", storageRoot)
if err := os.MkdirAll(storageRoot, os.ModePerm); err != nil {
return nil, err
}
return &LocalStorage{
ctx: ctx,
dir: config.Path,
tmpdir: config.TemporaryPath,
dir: storageRoot,
tmpdir: storageTmp,
}, nil
}
@@ -108,9 +116,21 @@ func (l *LocalStorage) Stat(path string) (os.FileInfo, error) {
return os.Stat(l.buildLocalPath(path))
}
// Delete delete a file
func (l *LocalStorage) deleteEmptyParentDirs(localFullPath string) {
for parent := filepath.Dir(localFullPath); len(parent) > len(l.dir); parent = filepath.Dir(parent) {
if err := os.Remove(parent); err != nil {
// since the target file has been deleted, parent dir error is not related to the file deletion itself.
break
}
}
}
// Delete deletes the file in storage and removes the empty parent directories (if possible)
func (l *LocalStorage) Delete(path string) error {
return util.Remove(l.buildLocalPath(path))
localFullPath := l.buildLocalPath(path)
err := util.Remove(localFullPath)
l.deleteEmptyParentDirs(localFullPath)
return err
}
// URL gets the redirect URL to a file
@@ -118,34 +138,38 @@ func (l *LocalStorage) URL(path, name, _ string, reqParams url.Values) (*url.URL
return nil, ErrURLNotSupported
}
func (l *LocalStorage) normalizeWalkError(err error) error {
if errors.Is(err, os.ErrNotExist) {
// ignore it because the file may be deleted during the walk, and we don't care about it
return nil
}
return err
}
// IterateObjects iterates across the objects in the local storage
func (l *LocalStorage) IterateObjects(dirName string, fn func(path string, obj Object) error) error {
dir := l.buildLocalPath(dirName)
return filepath.WalkDir(dir, func(path string, d os.DirEntry, err error) error {
if err != nil {
return filepath.WalkDir(dir, func(path string, d os.DirEntry, errWalk error) error {
if err := l.ctx.Err(); err != nil {
return err
}
select {
case <-l.ctx.Done():
return l.ctx.Err()
default:
if errWalk != nil {
return l.normalizeWalkError(errWalk)
}
if path == l.dir {
return nil
}
if d.IsDir() {
if path == l.dir || d.IsDir() {
return nil
}
relPath, err := filepath.Rel(l.dir, path)
if err != nil {
return err
return l.normalizeWalkError(err)
}
obj, err := os.Open(path)
if err != nil {
return err
return l.normalizeWalkError(err)
}
defer obj.Close()
return fn(relPath, obj)
return fn(filepath.ToSlash(relPath), obj)
})
}

View File

@@ -4,11 +4,14 @@
package storage
import (
"os"
"strings"
"testing"
"code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBuildLocalPath(t *testing.T) {
@@ -53,6 +56,49 @@ func TestBuildLocalPath(t *testing.T) {
}
}
func TestLocalStorageDelete(t *testing.T) {
rootDir := t.TempDir()
st, err := NewLocalStorage(t.Context(), &setting.Storage{Path: rootDir})
require.NoError(t, err)
assertExists := func(t *testing.T, path string, exists bool) {
_, err = os.Stat(rootDir + "/" + path)
if exists {
require.NoError(t, err)
} else {
require.ErrorIs(t, err, os.ErrNotExist)
}
}
_, err = st.Save("dir/sub1/1-a.txt", strings.NewReader(""), -1)
require.NoError(t, err)
_, err = st.Save("dir/sub1/1-b.txt", strings.NewReader(""), -1)
require.NoError(t, err)
_, err = st.Save("dir/sub2/2-a.txt", strings.NewReader(""), -1)
require.NoError(t, err)
assertExists(t, "dir/sub1/1-a.txt", true)
assertExists(t, "dir/sub1/1-b.txt", true)
assertExists(t, "dir/sub2/2-a.txt", true)
require.NoError(t, st.Delete("dir/sub1/1-a.txt"))
assertExists(t, "dir/sub1", true)
assertExists(t, "dir/sub1/1-a.txt", false)
assertExists(t, "dir/sub1/1-b.txt", true)
assertExists(t, "dir/sub2/2-a.txt", true)
require.NoError(t, st.Delete("dir/sub1/1-b.txt"))
assertExists(t, ".", true)
assertExists(t, "dir/sub1", false)
assertExists(t, "dir/sub1/1-a.txt", false)
assertExists(t, "dir/sub1/1-b.txt", false)
assertExists(t, "dir/sub2/2-a.txt", true)
require.NoError(t, st.Delete("dir/sub2/2-a.txt"))
assertExists(t, ".", true)
assertExists(t, "dir", false)
}
func TestLocalStorageIterator(t *testing.T) {
testStorageIterator(t, setting.LocalStorageType, &setting.Storage{Path: t.TempDir()})
}

View File

@@ -68,7 +68,12 @@ type ObjectStorage interface {
Stat(path string) (os.FileInfo, error)
Delete(path string) error
URL(path, name, method string, reqParams url.Values) (*url.URL, error)
IterateObjects(path string, iterator func(path string, obj Object) error) error
// IterateObjects calls the iterator function for each object in the storage with the given path as prefix
// The "fullPath" argument in callback is the full path in this storage.
// * IterateObjects("", ...): iterate all objects in this storage
// * IterateObjects("sub-path", ...): iterate all objects with "sub-path" as prefix in this storage, the "fullPath" will be like "sub-path/xxx"
IterateObjects(basePath string, iterator func(fullPath string, obj Object) error) error
}
// Copy copies a file from source ObjectStorage to dest ObjectStorage

View File

@@ -134,7 +134,7 @@ type CreateRepoOption struct {
// Whether the repository is private
Private bool `json:"private"`
// Label-Set to use
IssueLabels string `json:"issue_labels"`
IssueLabels string `json:"issue_labels" binding:"MaxSize(255)"`
// Whether the repository should be auto-initialized?
AutoInit bool `json:"auto_init"`
// Whether the repository is template
@@ -142,15 +142,15 @@ type CreateRepoOption struct {
// Gitignores to use
Gitignores string `json:"gitignores"`
// License to use
License string `json:"license"`
License string `json:"license" binding:"MaxSize(100)"`
// Readme of the repository to create
Readme string `json:"readme"`
Readme string `json:"readme" binding:"MaxSize(255)"`
// DefaultBranch of the repository (used when initializes and in template)
DefaultBranch string `json:"default_branch" binding:"GitRefName;MaxSize(100)"`
// TrustModel of the repository
// enum: default,collaborator,committer,collaboratorcommitter
TrustModel string `json:"trust_model"`
// ObjectFormatName of the underlying git repository
// ObjectFormatName of the underlying git repository, empty string for default (sha1)
// enum: sha1,sha256
ObjectFormatName string `json:"object_format_name" binding:"MaxSize(6)"`
}

View File

@@ -64,7 +64,7 @@ func PathJoinRelX(elem ...string) string {
return PathJoinRel(elems...)
}
const pathSeparator = string(os.PathSeparator)
const filepathSeparator = string(os.PathSeparator)
// FilePathJoinAbs joins the path elements into a single file path, each element is cleaned by filepath.Clean separately.
// All slashes/backslashes are converted to path separators before cleaning, the result only contains path separators.
@@ -75,30 +75,32 @@ const pathSeparator = string(os.PathSeparator)
// {`/foo`, ``, `bar`} => `/foo/bar`
// {`/foo`, `..`, `bar`} => `/foo/bar`
func FilePathJoinAbs(base string, sub ...string) string {
elems := make([]string, 1, len(sub)+1)
// POSIX filesystem can have `\` in file names. Windows: `\` and `/` are both used for path separators
// to keep the behavior consistent, we do not allow `\` in file names, replace all `\` with `/`
if isOSWindows() {
elems[0] = filepath.Clean(base)
} else {
elems[0] = filepath.Clean(strings.ReplaceAll(base, "\\", pathSeparator))
if !isOSWindows() {
base = strings.ReplaceAll(base, "\\", filepathSeparator)
}
if !filepath.IsAbs(elems[0]) {
// This shouldn't happen. If there is really necessary to pass in relative path, return the full path with filepath.Abs() instead
panic(fmt.Sprintf("FilePathJoinAbs: %q (for path %v) is not absolute, do not guess a relative path based on current working directory", elems[0], elems))
if !filepath.IsAbs(base) {
// This shouldn't happen. If it is really necessary to handle relative paths, use filepath.Abs() to get absolute paths first
panic(fmt.Sprintf("FilePathJoinAbs: %q (for path %v) is not absolute, do not guess a relative path based on current working directory", base, sub))
}
if len(sub) == 0 {
return filepath.Clean(base)
}
elems := make([]string, 1, len(sub)+1)
elems[0] = base
for _, s := range sub {
if s == "" {
continue
}
if isOSWindows() {
elems = append(elems, filepath.Clean(pathSeparator+s))
elems = append(elems, filepath.Clean(filepathSeparator+s))
} else {
elems = append(elems, filepath.Clean(pathSeparator+strings.ReplaceAll(s, "\\", pathSeparator)))
elems = append(elems, filepath.Clean(filepathSeparator+strings.ReplaceAll(s, "\\", filepathSeparator)))
}
}
// the elems[0] must be an absolute path, just join them together
// the elems[0] must be an absolute path, just join them together, and Join will also do Clean
return filepath.Join(elems...)
}
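The refactored `FilePathJoinAbs` above cleans each element separately so that a `..` in a sub element cannot climb out of the base, per the documented examples (`{"/foo", "..", "bar"}` yields `/foo/bar`). A minimal POSIX-only sketch of that idea, assuming a Unix-like filesystem (`filePathJoinAbs` here is a simplified stand-in, not the Windows-aware original):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// filePathJoinAbs joins sub elements onto an absolute base. Each sub
// element is cleaned as if it were rooted, so leading ".." components
// are discarded and cannot escape the base directory.
func filePathJoinAbs(base string, sub ...string) string {
	base = filepath.Clean(strings.ReplaceAll(base, "\\", "/"))
	if !filepath.IsAbs(base) {
		panic("base must be absolute")
	}
	elems := []string{base}
	for _, s := range sub {
		if s == "" {
			continue
		}
		// Cleaning "/"+s turns "../bar" into "/bar", "foo/../x" into "/x".
		elems = append(elems, filepath.Clean("/"+strings.ReplaceAll(s, "\\", "/")))
	}
	// filepath.Join also runs Clean on the final result.
	return filepath.Join(elems...)
}

func main() {
	fmt.Println(filePathJoinAbs("/foo", "", "bar"))   // /foo/bar
	fmt.Println(filePathJoinAbs("/foo", "..", "bar")) // /foo/bar
}
```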
@@ -115,12 +117,72 @@ func IsDir(dir string) (bool, error) {
return false, err
}
func IsRegularFile(filePath string) (bool, error) {
f, err := os.Lstat(filePath)
if err == nil {
return f.Mode().IsRegular(), nil
var ErrNotRegularPathFile = errors.New("not a regular file")
// ReadRegularPathFile reads a file with given sub path in root dir.
// It returns error when the path is not a regular file, or any parent path is not a regular directory.
func ReadRegularPathFile(root, filePathIn string, limit int) ([]byte, error) {
pathFields := strings.Split(PathJoinRelX(filePathIn), "/")
targetPathBuilder := strings.Builder{}
targetPathBuilder.Grow(len(root) + len(filePathIn) + 2)
targetPathBuilder.WriteString(root)
targetPathString := root
for i, subPath := range pathFields {
targetPathBuilder.WriteByte(filepath.Separator)
targetPathBuilder.WriteString(subPath)
targetPathString = targetPathBuilder.String()
expectFile := i == len(pathFields)-1
st, err := os.Lstat(targetPathString)
if err != nil {
return nil, err
}
if expectFile && !st.Mode().IsRegular() || !expectFile && !st.Mode().IsDir() {
return nil, fmt.Errorf("%w: %s", ErrNotRegularPathFile, filePathIn)
}
}
return false, err
f, err := os.Open(targetPathString)
if err != nil {
return nil, err
}
defer f.Close()
return ReadWithLimit(f, limit)
}
// WriteRegularPathFile writes data to a file with given sub path in root dir, it creates parent directories if necessary.
// The file is created with fileMode, and the directories are created with dirMode.
// It returns error when the path already exists but is not a regular file, or any parent path is not a regular directory.
func WriteRegularPathFile(root, filePathIn string, data []byte, dirMode, fileMode os.FileMode) error {
pathFields := strings.Split(PathJoinRelX(filePathIn), "/")
targetPathBuilder := strings.Builder{}
targetPathBuilder.Grow(len(root) + len(filePathIn) + 2)
targetPathBuilder.WriteString(root)
targetPathString := root
for i, subPath := range pathFields {
targetPathBuilder.WriteByte(filepath.Separator)
targetPathBuilder.WriteString(subPath)
targetPathString = targetPathBuilder.String()
expectFile := i == len(pathFields)-1
st, err := os.Lstat(targetPathString)
if err == nil {
if expectFile && !st.Mode().IsRegular() || !expectFile && !st.Mode().IsDir() {
return fmt.Errorf("%w: %s", ErrNotRegularPathFile, filePathIn)
}
continue
}
if !os.IsNotExist(err) {
return err
}
if !expectFile {
if err = os.Mkdir(targetPathString, dirMode); err != nil {
return err
}
}
}
return os.WriteFile(targetPathString, data, fileMode)
}
// IsExist checks whether a file or directory exists.

View File

@@ -6,6 +6,7 @@ package util
import (
"net/url"
"os"
"path/filepath"
"runtime"
"testing"
@@ -230,3 +231,70 @@ func TestListDirRecursively(t *testing.T) {
require.NoError(t, err)
assert.ElementsMatch(t, []string{"d1/f-d1", "d1/s1/f-d1s1"}, res)
}
func TestReadWriteRegularPathFile(t *testing.T) {
const readLimit = 10000
tmpDir := t.TempDir()
rootDir := tmpDir + "/root"
_ = os.Mkdir(rootDir, 0o755)
_ = os.WriteFile(tmpDir+"/other-file", []byte("other-content"), 0o755)
_ = os.Mkdir(rootDir+"/real-dir", 0o755)
_ = os.WriteFile(rootDir+"/real-dir/real-file", []byte("dummy-content"), 0o644)
_ = os.Symlink(rootDir+"/real-dir", rootDir+"/link-dir")
_ = os.Symlink(rootDir+"/real-dir/real-file", rootDir+"/real-dir/link-file")
t.Run("Read", func(t *testing.T) {
content, err := os.ReadFile(filepath.Join(rootDir, "../other-file"))
require.NoError(t, err)
assert.Equal(t, "other-content", string(content))
content, err = ReadRegularPathFile(rootDir, "../other-file", readLimit)
require.ErrorIs(t, err, os.ErrNotExist)
assert.Empty(t, string(content))
content, err = ReadRegularPathFile(rootDir, "real-dir/real-file", readLimit)
require.NoError(t, err)
assert.Equal(t, "dummy-content", string(content))
_, err = ReadRegularPathFile(rootDir, "link-dir/real-file", readLimit)
require.ErrorIs(t, err, ErrNotRegularPathFile)
_, err = ReadRegularPathFile(rootDir, "real-dir/link-file", readLimit)
require.ErrorIs(t, err, ErrNotRegularPathFile)
_, err = ReadRegularPathFile(rootDir, "link-dir/link-file", readLimit)
require.ErrorIs(t, err, ErrNotRegularPathFile)
})
t.Run("Write", func(t *testing.T) {
assertFileContent := func(path, expected string) {
data, err := os.ReadFile(path)
if expected == "" {
assert.ErrorIs(t, err, os.ErrNotExist)
return
}
require.NoError(t, err)
assert.Equal(t, expected, string(data), "file content mismatch for %s", path)
}
err := WriteRegularPathFile(rootDir, "new-dir/new-file", []byte("new-content"), 0o755, 0o644)
require.NoError(t, err)
assertFileContent(rootDir+"/new-dir/new-file", "new-content")
err = WriteRegularPathFile(rootDir, "link-dir/real-file", []byte("new-content"), 0o755, 0o644)
require.ErrorIs(t, err, ErrNotRegularPathFile)
err = WriteRegularPathFile(rootDir, "link-dir/link-file", []byte("new-content"), 0o755, 0o644)
require.ErrorIs(t, err, ErrNotRegularPathFile)
err = WriteRegularPathFile(rootDir, "link-dir/new-file", []byte("new-content"), 0o755, 0o644)
require.ErrorIs(t, err, ErrNotRegularPathFile)
err = WriteRegularPathFile(rootDir, "real-dir/link-file", []byte("new-content"), 0o755, 0o644)
require.ErrorIs(t, err, ErrNotRegularPathFile)
err = WriteRegularPathFile(rootDir, "../other-file", []byte("new-content"), 0o755, 0o644)
require.NoError(t, err)
assertFileContent(rootDir+"/../other-file", "other-content")
assertFileContent(rootDir+"/other-file", "new-content")
err = WriteRegularPathFile(rootDir, "real-dir/real-file", []byte("changed-content"), 0o755, 0o644)
require.NoError(t, err)
assertFileContent(rootDir+"/real-dir/real-file", "changed-content")
})
}

View File

@@ -834,6 +834,7 @@ add_new_openid = Add New OpenID URI
add_email = Add Email Address
add_openid = Add OpenID URI
add_email_confirmation_sent = A confirmation email has been sent to "%s". Please check your inbox within the next %s to confirm your email address.
email_primary_not_found = The selected email address could not be found.
add_email_success = The new email address has been added.
email_preference_set_success = Email preference has been set successfully.
add_openid_success = The new OpenID address has been added.

View File

@@ -98,7 +98,7 @@
"eslint-plugin-wc": "3.0.1",
"globals": "16.4.0",
"happy-dom": "18.0.1",
"markdownlint-cli": "0.45.0",
"markdownlint-cli": "0.48.0",
"material-icon-theme": "5.27.0",
"nolyfill": "1.0.44",
"postcss-html": "1.8.0",
@@ -107,7 +107,7 @@
"stylelint-declaration-block-no-ignored-properties": "2.8.0",
"stylelint-declaration-strict-value": "1.10.11",
"stylelint-value-no-unknown-custom-properties": "6.0.1",
"svgo": "4.0.0",
"svgo": "4.0.1",
"typescript-eslint": "8.43.0",
"updates": "16.7.0",
"vite-string-plugin": "1.4.6",

pnpm-lock.yaml generated
View File

@@ -298,8 +298,8 @@ importers:
specifier: 18.0.1
version: 18.0.1
markdownlint-cli:
specifier: 0.45.0
version: 0.45.0
specifier: 0.48.0
version: 0.48.0
material-icon-theme:
specifier: 5.27.0
version: 5.27.0
@@ -325,8 +325,8 @@ importers:
specifier: 6.0.1
version: 6.0.1(stylelint@16.24.0(typescript@5.9.2))
svgo:
specifier: 4.0.0
version: 4.0.0
specifier: 4.0.1
version: 4.0.1
typescript-eslint:
specifier: 8.43.0
version: 8.43.0(eslint@9.35.0(jiti@2.5.1))(typescript@5.9.2)
@@ -1836,6 +1836,10 @@ packages:
balanced-match@2.0.0:
resolution: {integrity: sha512-1ugUSr8BHXRnK23KfuYS+gVMC3LB8QGH9W1iGtDPsNWoQbgtXSExkBu2aDR4epiGWZOjZsj6lDl/N/AqqTC3UA==}
balanced-match@4.0.4:
resolution: {integrity: sha512-BLrgEcRTwX2o6gGxGOCNyMvGSp35YofuYzw9h1IMTRmKqttAZZVU67bdb9Pr2vUHA8+j3i2tJfjO6C6+4myGTA==}
engines: {node: 18 || 20 || >=22}
base64-js@1.5.1:
resolution: {integrity: sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==}
@@ -1855,6 +1859,10 @@ packages:
brace-expansion@2.0.2:
resolution: {integrity: sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==}
brace-expansion@5.0.4:
resolution: {integrity: sha512-h+DEnpVvxmfVefa4jFbCf5HdH5YMDXRsmKflpf1pILZWRFlTbJpxeU55nJl4Smt5HQaGzg1o6RHFPJaOqnmBDg==}
engines: {node: 18 || 20 || >=22}
braces@3.0.3:
resolution: {integrity: sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==}
engines: {node: '>=8'}
@@ -2014,9 +2022,9 @@ packages:
resolution: {integrity: sha512-Vw8qHK3bZM9y/P10u3Vib8o/DdkvA2OtPtZvD871QKjy74Wj1WSKFILMPRPSdUSx5RFK1arlJzEtA4PkFgnbuA==}
engines: {node: '>=18'}
commander@13.1.0:
resolution: {integrity: sha512-/rFeCpNJQbhSZjGVwO9RFV3xPqbnERS8MmIQzCtD/zl6gpJuV/bMLuN92oG3F7d8oDEHHRrujSXNUr8fpjntKw==}
engines: {node: '>=18'}
commander@14.0.3:
resolution: {integrity: sha512-H+y0Jo/T1RZ9qPP4Eh1pkcQcLRglraJaSLoyOtHxu6AapkjWVCy2Sit1QQ4x3Dng8qDlSsZEet7g5Pq06MvTgw==}
engines: {node: '>=20'}
commander@2.20.3:
resolution: {integrity: sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==}
@@ -2104,6 +2112,10 @@ packages:
resolution: {integrity: sha512-0eW44TGN5SQXU1mWSkKwFstI/22X2bG1nYzZTYMAWjylYURhse752YgbE4Cx46AC+bAvI+/dYTPRk1LqSUnu6w==}
engines: {node: ^10 || ^12.20.0 || ^14.13.0 || >=15.0.0}
css-tree@3.2.1:
resolution: {integrity: sha512-X7sjQzceUhu1u7Y/ylrRZFU2FS6LRiFVp6rKLPg23y3x3c3DOKAwuXGDp+PAGjh6CSnCjYeAul8pcT8bAl+lSA==}
engines: {node: ^10 || ^12.20.0 || ^14.13.0 || >=15.0.0}
css-what@6.2.2:
resolution: {integrity: sha512-u/O3vwbptzhMs3L1fQE82ZSLHQQfto5gyZzwteVIEyeaY5Fc7R4dapF/BvRoSYFeqfBk4m0V1Vafq5Pjv25wvA==}
engines: {node: '>= 6'}
@@ -2308,8 +2320,17 @@ packages:
supports-color:
optional: true
decode-named-character-reference@1.2.0:
resolution: {integrity: sha512-c6fcElNV6ShtZXmsgNgFFV5tVX2PaV4g+MOAkb8eXHvn6sryJBrZa9r0zV6+dtTyoCKxtDy5tyQ5ZwQuidtd+Q==}
debug@4.4.3:
resolution: {integrity: sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==}
engines: {node: '>=6.0'}
peerDependencies:
supports-color: '*'
peerDependenciesMeta:
supports-color:
optional: true
decode-named-character-reference@1.3.0:
resolution: {integrity: sha512-GtpQYB283KrPp6nRw50q3U9/VfOutZOe103qlN7BPP6Ad27xYnOIWv4lPzo8HCAL+mMZofJ9KEy30fq6MfaK6Q==}
decode-uri-component@0.2.2:
resolution: {integrity: sha512-FqUYQ+8o158GyGTrMFJms9qh3CqTKvAqgqsTnkLI8sKu0028orqBhxNMFkFen0zGyg6epACD32pjVk58ngIErQ==}
@@ -2675,6 +2696,10 @@ packages:
resolution: {integrity: sha512-ca9pw9fomFcKPvFLXhBKUK90ZvGibiGOvRJNbjljY7s7uq/5YO4BOzcYtJqExdx99rF6aAcnRxHmcUHcz6sQsg==}
engines: {node: '>=0.10'}
esquery@1.7.0:
resolution: {integrity: sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==}
engines: {node: '>=0.10'}
esrecurse@4.3.0:
resolution: {integrity: sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==}
engines: {node: '>=4.0'}
@@ -2835,6 +2860,10 @@ packages:
resolution: {integrity: sha512-QZjmEOC+IT1uk6Rx0sX22V6uHWVwbdbxf1faPqJ1QhLdGgsRGCZoyaQBm/piRdJy/D2um6hM1UP7ZEeQ4EkP+Q==}
engines: {node: '>=18'}
get-east-asian-width@1.5.0:
resolution: {integrity: sha512-CQ+bEO+Tva/qlmw24dCejulK5pMzVnUOFOijVogd3KQs07HnRIgp8TGipvCCRT06xeYEbpbgwaCxglFyiuIcmA==}
engines: {node: '>=18'}
get-set-props@0.2.0:
resolution: {integrity: sha512-YCmOj+4YAeEB5Dd9jfp6ETdejMet4zSxXjNkgaa4npBEKRI9uDOGB5MmAdAgi2OoFGAKshYhCbmLq2DS03CgVA==}
engines: {node: '>=18.0.0'}
@@ -2860,11 +2889,6 @@ packages:
resolution: {integrity: sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg==}
hasBin: true
glob@11.0.3:
resolution: {integrity: sha512-2Nim7dha1KVkaiF4q6Dj+ngPPMdfvLJEOpZk/jKiUAkqKebpGAWQXAq9z1xu9HKu5lWfqw/FASuccEjyznjPaA==}
engines: {node: 20 || >=22}
hasBin: true
glob@7.2.3:
resolution: {integrity: sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==}
deprecated: Glob versions prior to v9 are no longer supported
@@ -3102,10 +3126,6 @@ packages:
jackspeak@3.4.3:
resolution: {integrity: sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==}
jackspeak@4.1.1:
resolution: {integrity: sha512-zptv57P3GpL+O0I7VdMJNBZCu+BPHVQUk55Ft8/QCJjTVxrnJHuVuX/0Bl2A6/+2oyR/ZMEuFKwmzqqZ/U5nPQ==}
engines: {node: 20 || >=22}
jest-worker@27.5.1:
resolution: {integrity: sha512-7vuh85V5cdDofPyxn58nrPjBktZo0u9x1g8WtjQol+jZDaE+fhN+cIvTj11GndBnMnyfrUOG1sZQxCdjKh+DKg==}
engines: {node: '>= 10.13.0'}
@@ -3138,6 +3158,10 @@ packages:
resolution: {integrity: sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==}
hasBin: true
js-yaml@4.1.1:
resolution: {integrity: sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==}
hasBin: true
jsdoc-type-pratt-parser@4.8.0:
resolution: {integrity: sha512-iZ8Bdb84lWRuGHamRXFyML07r21pcwBrLkHEuHgEY5UbCouBwv7ECknDRKzsQIXMiqpPymqtIf8TC/shYKB5rw==}
engines: {node: '>=12.0.0'}
@@ -3331,10 +3355,6 @@ packages:
lru-cache@10.4.3:
resolution: {integrity: sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==}
lru-cache@11.2.1:
resolution: {integrity: sha512-r8LA6i4LP4EeWOhqBaZZjDWwehd1xUJPCJd9Sv300H0ZmcUER4+JPh7bqqZeqs1o5pgtgvXm+d9UGrB5zZGDiQ==}
engines: {node: 20 || >=22}
magic-string@0.25.9:
resolution: {integrity: sha512-RmF0AsMzgt25qzqqLc1+MbHmhdx0ojF2Fvs4XnOqz2ZOBXzzkEwc/dJQZCYHAn7v1jbVOjAZfK8msRn4BxO4VQ==}
@@ -3344,17 +3364,17 @@ packages:
markdown-escape@2.0.0:
resolution: {integrity: sha512-Trz4v0+XWlwy68LJIyw3bLbsJiC8XAbRCKF9DbEtZjyndKOGVx6n+wNB0VfoRmY2LKboQLeniap3xrb6LGSJ8A==}
markdown-it@14.1.0:
resolution: {integrity: sha512-a54IwgWPaeBCAAsv13YgmALOF1elABB08FxO9i+r4VFk5Vl4pKokRPeX8u5TCgSsPi6ec1otfLjdOpVcgbpshg==}
markdown-it@14.1.1:
resolution: {integrity: sha512-BuU2qnTti9YKgK5N+IeMubp14ZUKUUw7yeJbkjtosvHiP0AZ5c8IAgEMk79D0eC8F23r4Ac/q8cAIFdm2FtyoA==}
hasBin: true
markdownlint-cli@0.45.0:
resolution: {integrity: sha512-GiWr7GfJLVfcopL3t3pLumXCYs8sgWppjIA1F/Cc3zIMgD3tmkpyZ1xkm1Tej8mw53B93JsDjgA3KOftuYcfOw==}
markdownlint-cli@0.48.0:
resolution: {integrity: sha512-NkZQNu2E0Q5qLEEHwWj674eYISTLD4jMHkBzDobujXd1kv+yCxi8jOaD/rZoQNW1FBBMMGQpuW5So8B51N/e0A==}
engines: {node: '>=20'}
hasBin: true
markdownlint@0.38.0:
resolution: {integrity: sha512-xaSxkaU7wY/0852zGApM8LdlIfGCW8ETZ0Rr62IQtAnUMlMuifsg09vWJcNYeL4f0anvr8Vo4ZQar8jGpV0btQ==}
markdownlint@0.40.0:
resolution: {integrity: sha512-UKybllYNheWac61Ia7T6fzuQNDZimFIpCg2w6hHjgV1Qu0w1TV0LlSgryUGzM0bkKQCBhy2FDhEELB73Kb0kAg==}
engines: {node: '>=20'}
marked@15.0.12:
@@ -3380,6 +3400,9 @@ packages:
mdn-data@2.12.2:
resolution: {integrity: sha512-IEn+pegP1aManZuckezWCO+XZQDplx1366JoVhTpMpBB1sPey/SbveZQUosKiKiGYjg1wH4pMlNgXbCiYgihQA==}
mdn-data@2.27.1:
resolution: {integrity: sha512-9Yubnt3e8A0OKwxYSXyhLymGW4sCufcLG6VdiDdUGVkPhpqLxlvP5vl1983gQjJl3tqbrM731mjaZaP68AgosQ==}
mdurl@2.0.0:
resolution: {integrity: sha512-Lf+9+2r+Tdp5wXDXC4PcIBjTDtq4UKjCPMQhKIuzpJNW0b96kVqSwW0bT7FhRSfmAiFYgP+SCRvdrDozfh0U5w==}
@@ -3494,6 +3517,10 @@ packages:
resolution: {integrity: sha512-IPZ167aShDZZUMdRk66cyQAW3qr0WzbHkPdMYa8bzZhlHhO3jALbKdxcaak7W9FfT2rZNpQuUu4Od7ILEpXSaw==}
engines: {node: 20 || >=22}
minimatch@10.2.4:
resolution: {integrity: sha512-oRjTw/97aTBN0RHbYCdtF1MQfvusSIBQM0IZEgzl6426+8jSC0nF1a/GmnVLpfB9yyr6g6FTqWqiZVbxrtaCIg==}
engines: {node: 18 || 20 || >=22}
minimatch@3.1.2:
resolution: {integrity: sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==}
@@ -3676,10 +3703,6 @@ packages:
resolution: {integrity: sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==}
engines: {node: '>=16 || 14 >=14.18'}
path-scurry@2.0.0:
resolution: {integrity: sha512-ypGJsmGtdXUOeM5u93TyeIEfEhM6s+ljAhrk5vAvSx8uyY/02OvrZnA0YNGUrPXfpJMgI1ODd3nwz8Npx4O4cg==}
engines: {node: 20 || >=22}
path-type@4.0.0:
resolution: {integrity: sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==}
engines: {node: '>=8'}
@@ -3997,8 +4020,9 @@ packages:
sax@1.2.4:
resolution: {integrity: sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==}
sax@1.4.1:
resolution: {integrity: sha512-+aWOz7yVScEGoKNd4PA10LZ8sk0A/z5+nXQG5giUO5rprX9jgYsTdov9qCchZiPIZezbZH+jRut8nPodFAX4Jg==}
sax@1.5.0:
resolution: {integrity: sha512-21IYA3Q5cQf089Z6tgaUTr7lDAyzoTPx5HRtbhsME8Udispad8dC/+sziTNugOEx54ilvatQ9YCzl4KQLPcRHA==}
engines: {node: '>=11.0.0'}
schema-utils@4.3.2:
resolution: {integrity: sha512-Gn/JaSk/Mt9gYubxTtSn/QCV4em9mpAPiR1rqy/Ocu19u/G9J5WWdNoUT4SiV6mFC3y6cxyFcFwdzPM3FgxGAQ==}
@@ -4017,6 +4041,11 @@ packages:
engines: {node: '>=10'}
hasBin: true
semver@7.7.4:
resolution: {integrity: sha512-vFKC2IEtQnVhpT78h1Yp8wzwrf8CM+MzKMHGJZfBtzhZNycRFnXsHk6E5TxIkkMsgNS7mdX3AGB7x2QM2di4lA==}
engines: {node: '>=10'}
hasBin: true
serialize-javascript@6.0.2:
resolution: {integrity: sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g==}
@@ -4061,8 +4090,8 @@ packages:
resolution: {integrity: sha512-qMCMfhY040cVHT43K9BFygqYbUPFZKHOg7K73mtTWJRb8pyP3fzf4Ixd5SzdEJQ6MRUg/WBnOLxghZtKKurENQ==}
engines: {node: '>=10'}
smol-toml@1.3.4:
resolution: {integrity: sha512-UOPtVuYkzYGee0Bd2Szz8d2G3RfMfJ2t3qVdZUAozZyAk+a0Sxa+QKix0YCwjL/A1RR0ar44nCxaoN9FxdJGwA==}
smol-toml@1.6.0:
resolution: {integrity: sha512-4zemZi0HvTnYwLfrpk/CF9LOd9Lt87kAt50GnqhMpyF9U3poDAP2+iukq2bZsO/ufegbYehBkqINbsWxj4l4cw==}
engines: {node: '>= 18'}
solid-js@1.9.9:
@@ -4143,6 +4172,10 @@ packages:
resolution: {integrity: sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ==}
engines: {node: '>=18'}
string-width@8.1.0:
resolution: {integrity: sha512-Kxl3KJGb/gxkaUMOjRsQ8IrXiGW75O4E3RPjFIINOVH8AMl2SQ/yWdTzWwF3FevIX9LcMAjJW+GRwAlAbTSXdg==}
engines: {node: '>=20'}
strip-ansi@6.0.1:
resolution: {integrity: sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==}
engines: {node: '>=8'}
@@ -4151,6 +4184,10 @@ packages:
resolution: {integrity: sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==}
engines: {node: '>=12'}
strip-ansi@7.2.0:
resolution: {integrity: sha512-yDPMNjp4WyfYBkHnjIRLfca1i6KMyGCtsVgoKe/z1+6vukgaENdgGBZt+ZmKPc4gavvEZ5OgHfHdrazhgNyG7w==}
engines: {node: '>=12'}
strip-bom@3.0.0:
resolution: {integrity: sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA==}
engines: {node: '>=4'}
@@ -4235,8 +4272,8 @@ packages:
svg-tags@1.0.0:
resolution: {integrity: sha512-ovssysQTa+luh7A5Weu3Rta6FJlFBBbInjOh722LIt6klpU2/HtdUbszju/G4devcvk8PGt7FCLv5wftu3THUA==}
svgo@4.0.0:
resolution: {integrity: sha512-VvrHQ+9uniE+Mvx3+C9IEe/lWasXCU0nXMY2kZeLrHNICuRiC8uMPyM14UEaMOFA5mhyQqEkB02VoQ16n3DLaw==}
svgo@4.0.1:
resolution: {integrity: sha512-XDpWUOPC6FEibaLzjfe0ucaV0YrOjYotGJO1WpF0Zd+n6ZGEQUsSugaoLq9QkEZtAfQIxT42UChcssDVPP3+/w==}
engines: {node: '>=16'}
hasBin: true
@@ -6292,6 +6329,8 @@ snapshots:
balanced-match@2.0.0: {}
balanced-match@4.0.4: {}
base64-js@1.5.1: {}
big.js@5.2.2: {}
@@ -6309,6 +6348,10 @@ snapshots:
dependencies:
balanced-match: 1.0.2
brace-expansion@5.0.4:
dependencies:
balanced-match: 4.0.4
braces@3.0.3:
dependencies:
fill-range: 7.1.1
@@ -6464,7 +6507,7 @@ snapshots:
commander@12.1.0: {}
commander@13.1.0: {}
commander@14.0.3: {}
commander@2.20.3: {}
@@ -6548,6 +6591,11 @@ snapshots:
mdn-data: 2.12.2
source-map-js: 1.2.1
css-tree@3.2.1:
dependencies:
mdn-data: 2.27.1
source-map-js: 1.2.1
css-what@6.2.2: {}
css@3.0.0:
@@ -6764,7 +6812,11 @@ snapshots:
dependencies:
ms: 2.1.3
decode-named-character-reference@1.2.0:
debug@4.4.3:
dependencies:
ms: 2.1.3
decode-named-character-reference@1.3.0:
dependencies:
character-entities: 2.0.2
@@ -7264,6 +7316,10 @@ snapshots:
dependencies:
estraverse: 5.3.0
esquery@1.7.0:
dependencies:
estraverse: 5.3.0
esrecurse@4.3.0:
dependencies:
estraverse: 5.3.0
@@ -7402,6 +7458,8 @@ snapshots:
get-east-asian-width@1.4.0: {}
get-east-asian-width@1.5.0: {}
get-set-props@0.2.0: {}
get-source@2.0.12:
@@ -7432,15 +7490,6 @@ snapshots:
package-json-from-dist: 1.0.1
path-scurry: 1.11.1
glob@11.0.3:
dependencies:
foreground-child: 3.3.1
jackspeak: 4.1.1
minimatch: 10.0.3
minipass: 7.1.2
package-json-from-dist: 1.0.1
path-scurry: 2.0.0
glob@7.2.3:
dependencies:
fs.realpath: 1.0.0
@@ -7647,10 +7696,6 @@ snapshots:
optionalDependencies:
'@pkgjs/parseargs': 0.11.0
jackspeak@4.1.1:
dependencies:
'@isaacs/cliui': 8.0.2
jest-worker@27.5.1:
dependencies:
'@types/node': 24.3.1
@@ -7675,6 +7720,10 @@ snapshots:
dependencies:
argparse: 2.0.1
js-yaml@4.1.1:
dependencies:
argparse: 2.0.1
jsdoc-type-pratt-parser@4.8.0: {}
jsep@1.4.0: {}
@@ -7833,8 +7882,6 @@ snapshots:
lru-cache@10.4.3: {}
lru-cache@11.2.1: {}
magic-string@0.25.9:
dependencies:
sourcemap-codec: 1.4.8
@@ -7845,7 +7892,7 @@ snapshots:
markdown-escape@2.0.0: {}
markdown-it@14.1.0:
markdown-it@14.1.1:
dependencies:
argparse: 2.0.1
entities: 4.5.0
@@ -7854,23 +7901,24 @@ snapshots:
punycode.js: 2.3.1
uc.micro: 2.1.0
markdownlint-cli@0.45.0:
markdownlint-cli@0.48.0:
dependencies:
commander: 13.1.0
glob: 11.0.3
commander: 14.0.3
deep-extend: 0.6.0
ignore: 7.0.5
js-yaml: 4.1.0
js-yaml: 4.1.1
jsonc-parser: 3.3.1
jsonpointer: 5.0.1
markdown-it: 14.1.0
markdownlint: 0.38.0
minimatch: 10.0.3
markdown-it: 14.1.1
markdownlint: 0.40.0
minimatch: 10.2.4
run-con: 1.3.2
smol-toml: 1.3.4
smol-toml: 1.6.0
tinyglobby: 0.2.15
transitivePeerDependencies:
- supports-color
markdownlint@0.38.0:
markdownlint@0.40.0:
dependencies:
micromark: 4.0.2
micromark-core-commonmark: 2.0.3
@@ -7880,6 +7928,7 @@ snapshots:
micromark-extension-gfm-table: 2.1.1
micromark-extension-math: 3.1.0
micromark-util-types: 2.0.2
string-width: 8.1.0
transitivePeerDependencies:
- supports-color
@@ -7900,6 +7949,8 @@ snapshots:
mdn-data@2.12.2: {}
mdn-data@2.27.1: {}
mdurl@2.0.0: {}
meow@13.2.0: {}
@@ -7935,7 +7986,7 @@ snapshots:
micromark-core-commonmark@2.0.3:
dependencies:
decode-named-character-reference: 1.2.0
decode-named-character-reference: 1.3.0
devlop: 1.1.0
micromark-factory-destination: 2.0.1
micromark-factory-label: 2.0.1
@@ -8086,8 +8137,8 @@ snapshots:
micromark@4.0.2:
dependencies:
'@types/debug': 4.1.12
debug: 4.4.1
decode-named-character-reference: 1.2.0
debug: 4.4.3
decode-named-character-reference: 1.3.0
devlop: 1.1.0
micromark-core-commonmark: 2.0.3
micromark-factory-space: 2.0.1
@@ -8126,6 +8177,10 @@ snapshots:
dependencies:
'@isaacs/brace-expansion': 5.0.0
minimatch@10.2.4:
dependencies:
brace-expansion: 5.0.4
minimatch@3.1.2:
dependencies:
brace-expansion: 1.1.12
@@ -8266,7 +8321,7 @@ snapshots:
'@types/unist': 2.0.11
character-entities-legacy: 3.0.0
character-reference-invalid: 2.0.1
decode-named-character-reference: 1.2.0
decode-named-character-reference: 1.3.0
is-alphanumerical: 2.0.1
is-decimal: 2.0.1
is-hexadecimal: 2.0.1
@@ -8295,11 +8350,6 @@ snapshots:
lru-cache: 10.4.3
minipass: 7.1.2
path-scurry@2.0.0:
dependencies:
lru-cache: 11.2.1
minipass: 7.1.2
path-type@4.0.0: {}
pathe@2.0.3: {}
@@ -8594,7 +8644,7 @@ snapshots:
sax@1.2.4: {}
sax@1.4.1: {}
sax@1.5.0: {}
schema-utils@4.3.2:
dependencies:
@@ -8613,6 +8663,8 @@ snapshots:
semver@7.7.2: {}
semver@7.7.4: {}
serialize-javascript@6.0.2:
dependencies:
randombytes: 2.1.0
@@ -8649,7 +8701,7 @@ snapshots:
astral-regex: 2.0.0
is-fullwidth-code-point: 3.0.0
smol-toml@1.3.4: {}
smol-toml@1.6.0: {}
solid-js@1.9.9:
dependencies:
@@ -8735,6 +8787,11 @@ snapshots:
get-east-asian-width: 1.4.0
strip-ansi: 7.1.2
string-width@8.1.0:
dependencies:
get-east-asian-width: 1.5.0
strip-ansi: 7.2.0
strip-ansi@6.0.1:
dependencies:
ansi-regex: 5.0.1
@@ -8743,6 +8800,10 @@ snapshots:
dependencies:
ansi-regex: 6.2.2
strip-ansi@7.2.0:
dependencies:
ansi-regex: 6.2.2
strip-bom@3.0.0: {}
strip-indent@4.1.0: {}
@@ -8861,15 +8922,15 @@ snapshots:
svg-tags@1.0.0: {}
svgo@4.0.0:
svgo@4.0.1:
dependencies:
commander: 11.1.0
css-select: 5.2.2
css-tree: 3.1.0
css-tree: 3.2.1
css-what: 6.2.2
csso: 5.0.5
picocolors: 1.1.1
sax: 1.4.1
sax: 1.5.0
svgson@5.3.1:
dependencies:
@@ -9207,13 +9268,13 @@ snapshots:
vue-eslint-parser@10.2.0(eslint@9.35.0(jiti@2.5.1)):
dependencies:
debug: 4.4.1
debug: 4.4.3
eslint: 9.35.0(jiti@2.5.1)
eslint-scope: 8.4.0
eslint-visitor-keys: 4.2.1
espree: 10.4.0
esquery: 1.6.0
semver: 7.7.2
esquery: 1.7.0
semver: 7.7.4
transitivePeerDependencies:
- supports-color

View File

@@ -241,7 +241,7 @@ func (ar artifactRoutes) uploadArtifact(ctx *ArtifactContext) {
}
// get upload file size
fileRealTotalSize, contentLength := getUploadFileSize(ctx)
fileRealTotalSize := getUploadFileSize(ctx)
// get artifact retention days
expiredDays := setting.Actions.ArtifactRetentionDays
@@ -265,17 +265,17 @@ func (ar artifactRoutes) uploadArtifact(ctx *ArtifactContext) {
return
}
// save chunk to storage, if success, return chunk stotal size
// save chunk to storage, if success, return chunks total size
// if artifact is not gzip when uploading, chunksTotalSize == fileRealTotalSize
// if artifact is gzip when uploading, chunksTotalSize < fileRealTotalSize
chunksTotalSize, err := saveUploadChunk(ar.fs, ctx, artifact, contentLength, runID)
chunksTotalSize, err := saveUploadChunkV3GetTotalSize(ar.fs, ctx, artifact, runID)
if err != nil {
log.Error("Error save upload chunk: %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error save upload chunk")
return
}
// update artifact size if zero or not match, over write artifact size
// update artifact size if zero or not match, overwrite artifact size
if artifact.FileSize == 0 ||
artifact.FileCompressedSize == 0 ||
artifact.FileSize != fileRealTotalSize ||

View File

@@ -12,7 +12,7 @@ import (
"fmt"
"hash"
"io"
"path/filepath"
"path"
"sort"
"strings"
"time"
@@ -20,18 +20,73 @@ import (
"code.gitea.io/gitea/models/actions"
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
)
func saveUploadChunkBase(st storage.ObjectStorage, ctx *ArtifactContext,
artifact *actions.ActionArtifact,
contentSize, runID, start, end, length int64, checkMd5 bool,
) (int64, error) {
type saveUploadChunkOptions struct {
start int64
end *int64
checkMd5 bool
}
func makeTmpPathNameV3(runID int64) string {
return fmt.Sprintf("tmp-upload/run-%d", runID)
}
func makeTmpPathNameV4(runID int64) string {
return fmt.Sprintf("tmp-upload/run-%d-v4", runID)
}
func makeChunkFilenameV3(runID, artifactID, start int64, endPtr *int64) string {
var end int64
if endPtr != nil {
end = *endPtr
}
return fmt.Sprintf("%d-%d-%d-%d.chunk", runID, artifactID, start, end)
}
func parseChunkFileItemV3(st storage.ObjectStorage, fpath string) (*chunkFileItem, error) {
baseName := path.Base(fpath)
if !strings.HasSuffix(baseName, ".chunk") {
return nil, errSkipChunkFile
}
var item chunkFileItem
var unusedRunID int64
if _, err := fmt.Sscanf(baseName, "%d-%d-%d-%d.chunk", &unusedRunID, &item.ArtifactID, &item.Start, &item.End); err != nil {
return nil, err
}
item.Path = fpath
if item.End == 0 {
fi, err := st.Stat(item.Path)
if err != nil {
return nil, err
}
item.Size = fi.Size()
item.End = item.Start + item.Size - 1
} else {
item.Size = item.End - item.Start + 1
}
return &item, nil
}
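`makeChunkFilenameV3` and `parseChunkFileItemV3` above form a round trip: the chunk filename encodes run ID, artifact ID, and byte range, and `fmt.Sscanf` recovers them (an `End` of 0 means "size unknown until Stat", as in the diff). A self-contained sketch of that round trip, with `makeChunkName`/`parseChunkName` as simplified stand-ins for the Gitea functions:

```go
package main

import "fmt"

// makeChunkName encodes runID, artifactID and a byte range into the
// v3 chunk filename format used by the artifact upload code.
func makeChunkName(runID, artifactID, start, end int64) string {
	return fmt.Sprintf("%d-%d-%d-%d.chunk", runID, artifactID, start, end)
}

// parseChunkName recovers the four fields; Sscanf also verifies the
// literal "-" separators and the ".chunk" suffix.
func parseChunkName(name string) (runID, artifactID, start, end int64, err error) {
	_, err = fmt.Sscanf(name, "%d-%d-%d-%d.chunk", &runID, &artifactID, &start, &end)
	return
}

func main() {
	name := makeChunkName(7, 42, 0, 1023)
	fmt.Println(name) // 7-42-0-1023.chunk
	r, a, s, e, err := parseChunkName(name)
	fmt.Println(r, a, s, e, err) // 7 42 0 1023 <nil>
}
```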
func saveUploadChunkV3(st storage.ObjectStorage, ctx *ArtifactContext, artifact *actions.ActionArtifact,
runID int64, opts saveUploadChunkOptions,
) (writtenSize int64, retErr error) {
// build chunk store path
storagePath := fmt.Sprintf("tmp%d/%d-%d-%d-%d.chunk", runID, runID, artifact.ID, start, end)
storagePath := fmt.Sprintf("%s/%s", makeTmpPathNameV3(runID), makeChunkFilenameV3(runID, artifact.ID, opts.start, opts.end))
// "end" is optional, so "contentSize=-1" means read until EOF
contentSize := int64(-1)
if opts.end != nil {
contentSize = *opts.end - opts.start + 1
}
var r io.Reader = ctx.Req.Body
var hasher hash.Hash
if checkMd5 {
if opts.checkMd5 {
// use io.TeeReader to avoid reading all body to md5 sum.
// it writes data to hasher after reading end
// if hash is not matched, delete the read-end result
@@ -41,76 +96,81 @@ func saveUploadChunkBase(st storage.ObjectStorage, ctx *ArtifactContext,
// save chunk to storage
writtenSize, err := st.Save(storagePath, r, contentSize)
if err != nil {
return -1, fmt.Errorf("save chunk to storage error: %v", err)
return 0, fmt.Errorf("save chunk to storage error: %v", err)
}
var checkErr error
if checkMd5 {
defer func() {
if retErr != nil {
if err := st.Delete(storagePath); err != nil {
log.Error("Error deleting chunk: %s, %v", storagePath, err)
}
}
}()
if contentSize != -1 && writtenSize != contentSize {
return writtenSize, fmt.Errorf("writtenSize %d does not match contentSize %d", writtenSize, contentSize)
}
if opts.checkMd5 {
// check md5
reqMd5String := ctx.Req.Header.Get(artifactXActionsResultsMD5Header)
chunkMd5String := base64.StdEncoding.EncodeToString(hasher.Sum(nil))
log.Info("[artifact] check chunk md5, sum: %s, header: %s", chunkMd5String, reqMd5String)
log.Debug("[artifact] check chunk md5, sum: %s, header: %s", chunkMd5String, reqMd5String)
// if md5 not match, delete the chunk
if reqMd5String != chunkMd5String {
checkErr = errors.New("md5 not match")
return writtenSize, errors.New("md5 not match")
}
}
if writtenSize != contentSize {
checkErr = errors.Join(checkErr, fmt.Errorf("writtenSize %d not match contentSize %d", writtenSize, contentSize))
}
if checkErr != nil {
if err := st.Delete(storagePath); err != nil {
log.Error("Error deleting chunk: %s, %v", storagePath, err)
}
return -1, checkErr
}
log.Info("[artifact] save chunk %s, size: %d, artifact id: %d, start: %d, end: %d",
storagePath, contentSize, artifact.ID, start, end)
// return chunk total size
return length, nil
log.Debug("[artifact] save chunk %s, size: %d, artifact id: %d, start: %d, size: %d", storagePath, writtenSize, artifact.ID, opts.start, contentSize)
return writtenSize, nil
}
func saveUploadChunk(st storage.ObjectStorage, ctx *ArtifactContext,
artifact *actions.ActionArtifact,
contentSize, runID int64,
) (int64, error) {
func saveUploadChunkV3GetTotalSize(st storage.ObjectStorage, ctx *ArtifactContext, artifact *actions.ActionArtifact, runID int64) (totalSize int64, _ error) {
// parse content-range header, format: bytes 0-1023/146515
contentRange := ctx.Req.Header.Get("Content-Range")
start, end, length := int64(0), int64(0), int64(0)
if _, err := fmt.Sscanf(contentRange, "bytes %d-%d/%d", &start, &end, &length); err != nil {
log.Warn("parse content range error: %v, content-range: %s", err, contentRange)
return -1, fmt.Errorf("parse content range error: %v", err)
var start, end int64
if _, err := fmt.Sscanf(contentRange, "bytes %d-%d/%d", &start, &end, &totalSize); err != nil {
return 0, fmt.Errorf("parse content range error: %v", err)
}
return saveUploadChunkBase(st, ctx, artifact, contentSize, runID, start, end, length, true)
_, err := saveUploadChunkV3(st, ctx, artifact, runID, saveUploadChunkOptions{start: start, end: &end, checkMd5: true})
if err != nil {
return 0, err
}
return totalSize, nil
}
func appendUploadChunk(st storage.ObjectStorage, ctx *ArtifactContext,
artifact *actions.ActionArtifact,
start, contentSize, runID int64,
) (int64, error) {
end := start + contentSize - 1
return saveUploadChunkBase(st, ctx, artifact, contentSize, runID, start, end, contentSize, false)
// Returns uploaded length
func appendUploadChunkV3(st storage.ObjectStorage, ctx *ArtifactContext, artifact *actions.ActionArtifact, runID, start int64) (int64, error) {
opts := saveUploadChunkOptions{start: start}
if ctx.Req.ContentLength > 0 {
end := start + ctx.Req.ContentLength - 1
opts.end = &end
}
return saveUploadChunkV3(st, ctx, artifact, runID, opts)
}
type chunkFileItem struct {
RunID int64
ArtifactID int64
Start int64
End int64
Path string
// these offset/size-related fields might be missing when parsing; they are filled in by the listing functions
Size int64
Start int64
End int64 // inclusive: Size=10, Start=0, End=9
ChunkName string // v4 only
}
func listChunksByRunID(st storage.ObjectStorage, runID int64) (map[int64][]*chunkFileItem, error) {
storageDir := fmt.Sprintf("tmp%d", runID)
func listV3UnorderedChunksMapByRunID(st storage.ObjectStorage, runID int64) (map[int64][]*chunkFileItem, error) {
storageDir := makeTmpPathNameV3(runID)
var chunks []*chunkFileItem
if err := st.IterateObjects(storageDir, func(fpath string, obj storage.Object) error {
baseName := filepath.Base(fpath)
// when chunks are read from storage, the path only contains the storage dir and basename,
// regardless of the subdirectory setting in the storage config
item := chunkFileItem{Path: storageDir + "/" + baseName}
if _, err := fmt.Sscanf(baseName, "%d-%d-%d-%d.chunk", &item.RunID, &item.ArtifactID, &item.Start, &item.End); err != nil {
return fmt.Errorf("parse content range error: %v", err)
item, err := parseChunkFileItemV3(st, fpath)
if errors.Is(err, errSkipChunkFile) {
return nil
} else if err != nil {
return fmt.Errorf("unable to parse chunk name: %v", fpath)
}
chunks = append(chunks, &item)
chunks = append(chunks, item)
return nil
}); err != nil {
return nil, err
@@ -123,52 +183,78 @@ func listChunksByRunID(st storage.ObjectStorage, runID int64) (map[int64][]*chun
return chunksMap, nil
}
func listChunksByRunIDV4(st storage.ObjectStorage, runID, artifactID int64, blist *BlockList) ([]*chunkFileItem, error) {
storageDir := fmt.Sprintf("tmpv4%d", runID)
var chunks []*chunkFileItem
chunkMap := map[string]*chunkFileItem{}
dummy := &chunkFileItem{}
for _, name := range blist.Latest {
chunkMap[name] = dummy
func listOrderedChunksForArtifact(st storage.ObjectStorage, runID, artifactID int64, blist *BlockList) ([]*chunkFileItem, error) {
emptyListAsError := func(chunks []*chunkFileItem) ([]*chunkFileItem, error) {
if len(chunks) == 0 {
return nil, fmt.Errorf("no chunk found for artifact id: %d", artifactID)
}
return chunks, nil
}
storageDir := makeTmpPathNameV4(runID)
var chunks []*chunkFileItem
var chunkMapV4 map[string]*chunkFileItem
if blist != nil {
// build a lookup map keyed by chunk name; the values are nil for now and are filled in while iterating the storage objects
chunkMapV4 = map[string]*chunkFileItem{}
for _, name := range blist.Latest {
chunkMapV4[name] = nil
}
}
if err := st.IterateObjects(storageDir, func(fpath string, obj storage.Object) error {
baseName := filepath.Base(fpath)
if !strings.HasPrefix(baseName, "block-") {
item, err := parseChunkFileItemV4(st, artifactID, fpath)
if errors.Is(err, errSkipChunkFile) {
return nil
} else if err != nil {
return fmt.Errorf("unable to parse chunk name: %v", fpath)
}
// when chunks are read from storage, the path only contains the storage dir and basename,
// regardless of the subdirectory setting in the storage config
item := chunkFileItem{Path: storageDir + "/" + baseName, ArtifactID: artifactID}
var size int64
var b64chunkName string
if _, err := fmt.Sscanf(baseName, "block-%d-%d-%s", &item.RunID, &size, &b64chunkName); err != nil {
return fmt.Errorf("parse content range error: %v", err)
}
rchunkName, err := base64.URLEncoding.DecodeString(b64chunkName)
if err != nil {
return fmt.Errorf("failed to parse chunkName: %v", err)
}
chunkName := string(rchunkName)
item.End = item.Start + size - 1
if _, ok := chunkMap[chunkName]; ok {
chunkMap[chunkName] = &item
// Single chunk upload with block id
if _, ok := chunkMapV4[item.ChunkName]; ok {
chunkMapV4[item.ChunkName] = item
} else if chunkMapV4 == nil {
if chunks != nil {
return errors.New("blockmap is required for chunks > 1")
}
chunks = []*chunkFileItem{item}
}
return nil
}); err != nil {
return nil, err
}
for i, name := range blist.Latest {
chunk, ok := chunkMap[name]
if !ok || chunk.Path == "" {
return nil, fmt.Errorf("missing Chunk (%d/%d): %s", i, len(blist.Latest), name)
if blist == nil && chunks == nil {
chunkUnorderedItemsMapV3, err := listV3UnorderedChunksMapByRunID(st, runID)
if err != nil {
return nil, err
}
chunks = append(chunks, chunk)
if i > 0 {
chunk.Start = chunkMap[blist.Latest[i-1]].End + 1
chunk.End += chunk.Start
chunks = chunkUnorderedItemsMapV3[artifactID]
sort.Slice(chunks, func(i, j int) bool {
return chunks[i].Start < chunks[j].Start
})
return emptyListAsError(chunks)
}
if len(chunks) == 0 && blist != nil {
for i, name := range blist.Latest {
chunk := chunkMapV4[name]
if chunk == nil {
return nil, fmt.Errorf("missing chunk (%d/%d): %s", i, len(blist.Latest), name)
}
chunks = append(chunks, chunk)
}
}
return chunks, nil
for i, chunk := range chunks {
if i == 0 {
chunk.End += chunk.Size - 1
} else {
chunk.Start = chunkMapV4[blist.Latest[i-1]].End + 1
chunk.End = chunk.Start + chunk.Size - 1
}
}
return emptyListAsError(chunks)
}
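The offset-assignment loop at the end of the function lays chunks out back to back, with `End` inclusive. A minimal sketch of that layout logic, using a simplified chunk struct rather than the full `chunkFileItem`:

```go
package main

import "fmt"

type chunk struct {
	Size, Start, End int64 // End is inclusive: Size=10 -> Start=0, End=9
}

// orderChunks lays out chunks contiguously in the order given by names,
// deriving Start/End from each chunk's Size, like the loop above.
func orderChunks(byName map[string]*chunk, names []string) ([]*chunk, error) {
	out := make([]*chunk, 0, len(names))
	for i, name := range names {
		c := byName[name]
		if c == nil {
			return nil, fmt.Errorf("missing chunk (%d/%d): %s", i, len(names), name)
		}
		if i == 0 {
			c.Start = 0
			c.End = c.Size - 1
		} else {
			c.Start = out[i-1].End + 1
			c.End = c.Start + c.Size - 1
		}
		out = append(out, c)
	}
	return out, nil
}

func main() {
	byName := map[string]*chunk{"a": {Size: 5}, "b": {Size: 3}, "c": {Size: 4}}
	out, _ := orderChunks(byName, []string{"a", "b", "c"})
	for _, c := range out {
		fmt.Println(c.Start, c.End)
	}
	// total file size is the last chunk's End + 1
	fmt.Println("total:", out[len(out)-1].End+1)
}
```

This is why `finalizeArtifact` can compute the artifact size as `chunks[len(chunks)-1].End + 1`.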
func mergeChunksForRun(ctx *ArtifactContext, st storage.ObjectStorage, runID int64, artifactName string) error {
@@ -181,13 +267,13 @@ func mergeChunksForRun(ctx *ArtifactContext, st storage.ObjectStorage, runID int
return err
}
// read all uploading chunks from storage
chunksMap, err := listChunksByRunID(st, runID)
unorderedChunksMap, err := listV3UnorderedChunksMapByRunID(st, runID)
if err != nil {
return err
}
// range db artifacts to merge chunks
for _, art := range artifacts {
chunks, ok := chunksMap[art.ID]
chunks, ok := unorderedChunksMap[art.ID]
if !ok {
log.Debug("artifact %d chunks not found", art.ID)
continue
@@ -239,12 +325,14 @@ func mergeChunksForArtifact(ctx *ArtifactContext, chunks []*chunkFileItem, st st
}
mergedReader := io.MultiReader(readers...)
shaPrefix := "sha256:"
var hash hash.Hash
var hashSha256 hash.Hash
if strings.HasPrefix(checksum, shaPrefix) {
hash = sha256.New()
hashSha256 = sha256.New()
} else if checksum != "" {
setting.PanicInDevOrTesting("unsupported checksum format: %s, will skip the checksum verification", checksum)
}
if hash != nil {
mergedReader = io.TeeReader(mergedReader, hash)
if hashSha256 != nil {
mergedReader = io.TeeReader(mergedReader, hashSha256)
}
// if chunk is gzip, use gz as extension
@@ -274,8 +362,8 @@ func mergeChunksForArtifact(ctx *ArtifactContext, chunks []*chunkFileItem, st st
}
}()
if hash != nil {
rawChecksum := hash.Sum(nil)
if hashSha256 != nil {
rawChecksum := hashSha256.Sum(nil)
actualChecksum := hex.EncodeToString(rawChecksum)
if !strings.HasSuffix(checksum, actualChecksum) {
return fmt.Errorf("update artifact error checksum is invalid %v vs %v", checksum, actualChecksum)


@@ -20,8 +20,8 @@ const (
artifactXActionsResultsMD5Header = "x-actions-results-md5"
)
// The rules are from https://github.com/actions/toolkit/blob/main/packages/artifact/src/internal/path-and-artifact-name-validation.ts#L32
var invalidArtifactNameChars = strings.Join([]string{"\\", "/", "\"", ":", "<", ">", "|", "*", "?", "\r", "\n"}, "")
// The rules are from https://github.com/actions/toolkit/blob/main/packages/artifact/src/internal/upload/path-and-artifact-name-validation.ts
const invalidArtifactNameChars = "\\/\":<>|*?\r\n"
func validateArtifactName(ctx *ArtifactContext, artifactName string) bool {
if strings.ContainsAny(artifactName, invalidArtifactNameChars) {
@@ -84,11 +84,10 @@ func parseArtifactItemPath(ctx *ArtifactContext) (string, string, bool) {
// getUploadFileSize returns the size of the file to be uploaded.
// The raw size is the size of the file as reported by the header X-TFS-FileLength.
func getUploadFileSize(ctx *ArtifactContext) (int64, int64) {
contentLength := ctx.Req.ContentLength
func getUploadFileSize(ctx *ArtifactContext) int64 {
xTfsLength, _ := strconv.ParseInt(ctx.Req.Header.Get(artifactXTfsFileLengthHeader), 10, 64)
if xTfsLength > 0 {
return xTfsLength, contentLength
return xTfsLength
}
return contentLength, contentLength
return ctx.Req.ContentLength
}


@@ -90,10 +90,12 @@ import (
"crypto/sha256"
"encoding/base64"
"encoding/xml"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"time"
@@ -109,7 +111,7 @@ import (
"code.gitea.io/gitea/services/context"
"google.golang.org/protobuf/encoding/protojson"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/known/timestamppb"
)
@@ -157,33 +159,81 @@ func ArtifactsV4Routes(prefix string) *web.Router {
return m
}
func (r artifactV4Routes) buildSignature(endp, expires, artifactName string, taskID, artifactID int64) []byte {
func (r *artifactV4Routes) buildSignature(endpoint, expires, artifactName string, taskID, artifactID int64) []byte {
mac := hmac.New(sha256.New, setting.GetGeneralTokenSigningSecret())
mac.Write([]byte(endp))
mac.Write([]byte(endpoint))
mac.Write([]byte(expires))
mac.Write([]byte(artifactName))
fmt.Fprint(mac, taskID)
fmt.Fprint(mac, artifactID)
_, _ = fmt.Fprint(mac, taskID)
_, _ = fmt.Fprint(mac, artifactID)
return mac.Sum(nil)
}
func (r artifactV4Routes) buildArtifactURL(ctx *ArtifactContext, endp, artifactName string, taskID, artifactID int64) string {
func (r *artifactV4Routes) buildArtifactURL(ctx *ArtifactContext, endpoint, artifactName string, taskID, artifactID int64) string {
expires := time.Now().Add(60 * time.Minute).Format("2006-01-02 15:04:05.999999999 -0700 MST")
uploadURL := strings.TrimSuffix(httplib.GuessCurrentAppURL(ctx), "/") + strings.TrimSuffix(r.prefix, "/") +
"/" + endp + "?sig=" + base64.URLEncoding.EncodeToString(r.buildSignature(endp, expires, artifactName, taskID, artifactID)) + "&expires=" + url.QueryEscape(expires) + "&artifactName=" + url.QueryEscape(artifactName) + "&taskID=" + strconv.FormatInt(taskID, 10) + "&artifactID=" + strconv.FormatInt(artifactID, 10)
"/" + endpoint +
"?sig=" + base64.RawURLEncoding.EncodeToString(r.buildSignature(endpoint, expires, artifactName, taskID, artifactID)) +
"&expires=" + url.QueryEscape(expires) +
"&artifactName=" + url.QueryEscape(artifactName) +
"&taskID=" + strconv.FormatInt(taskID, 10) +
"&artifactID=" + strconv.FormatInt(artifactID, 10)
return uploadURL
}
func (r artifactV4Routes) verifySignature(ctx *ArtifactContext, endp string) (*actions.ActionTask, string, bool) {
func makeBlockFilenameV4(runID, artifactID, size int64, blockID string) string {
sizeInName := max(size, 0) // do not use "-1" in filename
return fmt.Sprintf("block-%d-%d-%d-%s", runID, artifactID, sizeInName, base64.URLEncoding.EncodeToString([]byte(blockID)))
}
var errSkipChunkFile = errors.New("skip this chunk file")
func parseChunkFileItemV4(st storage.ObjectStorage, artifactID int64, fpath string) (*chunkFileItem, error) {
baseName := path.Base(fpath)
if !strings.HasPrefix(baseName, "block-") {
return nil, errSkipChunkFile
}
var item chunkFileItem
var unusedRunID int64
var b64chunkName string
_, err := fmt.Sscanf(baseName, "block-%d-%d-%d-%s", &unusedRunID, &item.ArtifactID, &item.Size, &b64chunkName)
if err != nil {
return nil, err
}
if item.ArtifactID != artifactID {
return nil, errSkipChunkFile
}
chunkName, err := base64.URLEncoding.DecodeString(b64chunkName)
if err != nil {
return nil, err
}
item.ChunkName = string(chunkName)
item.Path = fpath
if item.Size <= 0 {
fi, err := st.Stat(item.Path)
if err != nil {
return nil, err
}
item.Size = fi.Size()
}
return &item, nil
}
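The block filename scheme encodes the client-supplied block ID with URL-safe base64 so it is filesystem-safe, and parses it back with `Sscanf`. A minimal round-trip sketch (helper names are illustrative, not Gitea's API):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// makeBlockName builds "block-<runID>-<artifactID>-<size>-<b64(blockID)>".
func makeBlockName(runID, artifactID, size int64, blockID string) string {
	if size < 0 {
		size = 0 // do not put "-1" in a filename
	}
	return fmt.Sprintf("block-%d-%d-%d-%s", runID, artifactID, size,
		base64.URLEncoding.EncodeToString([]byte(blockID)))
}

// parseBlockName reverses makeBlockName; %s consumes the rest of the name,
// which is safe because base64 output contains no whitespace.
func parseBlockName(name string) (runID, artifactID, size int64, blockID string, err error) {
	var b64 string
	if _, err = fmt.Sscanf(name, "block-%d-%d-%d-%s", &runID, &artifactID, &size, &b64); err != nil {
		return
	}
	raw, err := base64.URLEncoding.DecodeString(b64)
	blockID = string(raw)
	return
}

func main() {
	name := makeBlockName(1, 2, 10, "abc/def")
	fmt.Println(name)
	fmt.Println(parseBlockName(name))
}
```

Encoding the block ID matters because clients may send IDs containing `/` or other characters that would break a flat storage path.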
func (r *artifactV4Routes) verifySignature(ctx *ArtifactContext, endp string) (*actions.ActionTask, string, bool) {
rawTaskID := ctx.Req.URL.Query().Get("taskID")
rawArtifactID := ctx.Req.URL.Query().Get("artifactID")
sig := ctx.Req.URL.Query().Get("sig")
expires := ctx.Req.URL.Query().Get("expires")
artifactName := ctx.Req.URL.Query().Get("artifactName")
dsig, _ := base64.URLEncoding.DecodeString(sig)
taskID, _ := strconv.ParseInt(rawTaskID, 10, 64)
artifactID, _ := strconv.ParseInt(rawArtifactID, 10, 64)
dsig, errSig := base64.RawURLEncoding.DecodeString(sig)
taskID, errTask := strconv.ParseInt(rawTaskID, 10, 64)
artifactID, errArtifactID := strconv.ParseInt(rawArtifactID, 10, 64)
err := errors.Join(errSig, errTask, errArtifactID)
if err != nil {
log.Error("Error decoding signature values: %v", err)
ctx.HTTPError(http.StatusBadRequest, "Error decoding signature values")
return nil, "", false
}
expecedsig := r.buildSignature(endp, expires, artifactName, taskID, artifactID)
if !hmac.Equal(dsig, expecedsig) {
log.Error("Error unauthorized")
@@ -226,7 +276,7 @@ func (r *artifactV4Routes) getArtifactByName(ctx *ArtifactContext, runID int64,
return &art, nil
}
func (r *artifactV4Routes) parseProtbufBody(ctx *ArtifactContext, req protoreflect.ProtoMessage) bool {
func (r *artifactV4Routes) parseProtobufBody(ctx *ArtifactContext, req protoreflect.ProtoMessage) bool {
body, err := io.ReadAll(ctx.Req.Body)
if err != nil {
log.Error("Error decode request body: %v", err)
@@ -242,7 +292,7 @@ func (r *artifactV4Routes) parseProtbufBody(ctx *ArtifactContext, req protorefle
return true
}
func (r *artifactV4Routes) sendProtbufBody(ctx *ArtifactContext, req protoreflect.ProtoMessage) {
func (r *artifactV4Routes) sendProtobufBody(ctx *ArtifactContext, req protoreflect.ProtoMessage) {
resp, err := protojson.Marshal(req)
if err != nil {
log.Error("Error encode response body: %v", err)
@@ -257,7 +307,7 @@ func (r *artifactV4Routes) sendProtbufBody(ctx *ArtifactContext, req protoreflec
func (r *artifactV4Routes) createArtifact(ctx *ArtifactContext) {
var req CreateArtifactRequest
if ok := r.parseProtbufBody(ctx, &req); !ok {
if ok := r.parseProtobufBody(ctx, &req); !ok {
return
}
_, _, ok := validateRunIDV4(ctx, req.WorkflowRunBackendId)
@@ -291,7 +341,7 @@ func (r *artifactV4Routes) createArtifact(ctx *ArtifactContext) {
Ok: true,
SignedUploadUrl: r.buildArtifactURL(ctx, "UploadArtifact", artifactName, ctx.ActionTask.ID, artifact.ID),
}
r.sendProtbufBody(ctx, &respData)
r.sendProtobufBody(ctx, &respData)
}
func (r *artifactV4Routes) uploadArtifact(ctx *ArtifactContext) {
@@ -303,34 +353,34 @@ func (r *artifactV4Routes) uploadArtifact(ctx *ArtifactContext) {
comp := ctx.Req.URL.Query().Get("comp")
switch comp {
case "block", "appendBlock":
blockid := ctx.Req.URL.Query().Get("blockid")
if blockid == "" {
// get artifact by name
artifact, err := r.getArtifactByName(ctx, task.Job.RunID, artifactName)
// get artifact by name
artifact, err := r.getArtifactByName(ctx, task.Job.RunID, artifactName)
if err != nil {
log.Error("Error artifact not found: %v", err)
ctx.HTTPError(http.StatusNotFound, "Error artifact not found")
return
}
blockID := ctx.Req.URL.Query().Get("blockid")
if blockID == "" {
uploadedLength, err := appendUploadChunkV3(r.fs, ctx, artifact, artifact.RunID, artifact.FileSize)
if err != nil {
log.Error("Error artifact not found: %v", err)
ctx.HTTPError(http.StatusNotFound, "Error artifact not found")
log.Error("Error appending chunk %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error appending Chunk")
return
}
_, err = appendUploadChunk(r.fs, ctx, artifact, artifact.FileSize, ctx.Req.ContentLength, artifact.RunID)
if err != nil {
log.Error("Error runner api getting task: task is not running")
ctx.HTTPError(http.StatusInternalServerError, "Error runner api getting task: task is not running")
return
}
artifact.FileCompressedSize += ctx.Req.ContentLength
artifact.FileSize += ctx.Req.ContentLength
artifact.FileCompressedSize += uploadedLength
artifact.FileSize += uploadedLength
if err := actions.UpdateArtifactByID(ctx, artifact.ID, artifact); err != nil {
log.Error("Error UpdateArtifactByID: %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error UpdateArtifactByID")
return
}
} else {
_, err := r.fs.Save(fmt.Sprintf("tmpv4%d/block-%d-%d-%s", task.Job.RunID, task.Job.RunID, ctx.Req.ContentLength, base64.URLEncoding.EncodeToString([]byte(blockid))), ctx.Req.Body, -1)
blockFilename := makeBlockFilenameV4(task.Job.RunID, artifact.ID, ctx.Req.ContentLength, blockID)
_, err := r.fs.Save(fmt.Sprintf("%s/%s", makeTmpPathNameV4(task.Job.RunID), blockFilename), ctx.Req.Body, ctx.Req.ContentLength)
if err != nil {
log.Error("Error runner api getting task: task is not running")
ctx.HTTPError(http.StatusInternalServerError, "Error runner api getting task: task is not running")
log.Error("Error uploading block blob %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error uploading block blob")
return
}
}
@@ -338,10 +388,10 @@ func (r *artifactV4Routes) uploadArtifact(ctx *ArtifactContext) {
case "blocklist":
rawArtifactID := ctx.Req.URL.Query().Get("artifactID")
artifactID, _ := strconv.ParseInt(rawArtifactID, 10, 64)
_, err := r.fs.Save(fmt.Sprintf("tmpv4%d/%d-%d-blocklist", task.Job.RunID, task.Job.RunID, artifactID), ctx.Req.Body, -1)
_, err := r.fs.Save(fmt.Sprintf("%s/%d-%d-blocklist", makeTmpPathNameV4(task.Job.RunID), task.Job.RunID, artifactID), ctx.Req.Body, -1)
if err != nil {
log.Error("Error runner api getting task: task is not running")
ctx.HTTPError(http.StatusInternalServerError, "Error runner api getting task: task is not running")
log.Error("Error uploading blocklist %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error uploading blocklist")
return
}
ctx.JSON(http.StatusCreated, "created")
@@ -357,7 +407,7 @@ type Latest struct {
}
func (r *artifactV4Routes) readBlockList(runID, artifactID int64) (*BlockList, error) {
blockListName := fmt.Sprintf("tmpv4%d/%d-%d-blocklist", runID, runID, artifactID)
blockListName := fmt.Sprintf("%s/%d-%d-blocklist", makeTmpPathNameV4(runID), runID, artifactID)
s, err := r.fs.Open(blockListName)
if err != nil {
return nil, err
@@ -367,17 +417,22 @@ func (r *artifactV4Routes) readBlockList(runID, artifactID int64) (*BlockList, e
blockList := &BlockList{}
err = xdec.Decode(blockList)
_ = s.Close()
delerr := r.fs.Delete(blockListName)
if delerr != nil {
log.Warn("Failed to delete blockList %s: %v", blockListName, delerr)
}
return blockList, err
if err != nil {
return nil, err
}
return blockList, nil
}
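`readBlockList` XML-decodes the block list the runner uploaded. A minimal sketch of that decode, assuming an Azure-Blob-style body with repeated `<Latest>` elements (the exact struct layout in Gitea may differ):

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// BlockList is a sketch of the decoded block list: the ordered block IDs
// that define how uploaded chunks are assembled.
type BlockList struct {
	Latest []string `xml:"Latest"`
}

func readBlockList(body string) (*BlockList, error) {
	bl := &BlockList{}
	if err := xml.NewDecoder(strings.NewReader(body)).Decode(bl); err != nil {
		return nil, err
	}
	return bl, nil
}

func main() {
	body := `<BlockList><Latest>YQ</Latest><Latest>Yg</Latest></BlockList>`
	bl, err := readBlockList(body)
	fmt.Println(err, bl.Latest)
}
```

Repeated `<Latest>` elements accumulate into the slice in document order, which is what makes the later "order chunks by block list" step possible.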
func (r *artifactV4Routes) finalizeArtifact(ctx *ArtifactContext) {
var req FinalizeArtifactRequest
if ok := r.parseProtbufBody(ctx, &req); !ok {
if ok := r.parseProtobufBody(ctx, &req); !ok {
return
}
_, runID, ok := validateRunIDV4(ctx, req.WorkflowRunBackendId)
@@ -394,30 +449,20 @@ func (r *artifactV4Routes) finalizeArtifact(ctx *ArtifactContext) {
}
var chunks []*chunkFileItem
blockList, err := r.readBlockList(runID, artifact.ID)
blockList, blockListErr := r.readBlockList(runID, artifact.ID)
chunks, err = listOrderedChunksForArtifact(r.fs, runID, artifact.ID, blockList)
if err != nil {
log.Warn("Failed to read BlockList, fallback to old behavior: %v", err)
chunkMap, err := listChunksByRunID(r.fs, runID)
if err != nil {
log.Error("Error merge chunks: %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error merge chunks")
return
}
chunks, ok = chunkMap[artifact.ID]
if !ok {
log.Error("Error merge chunks")
ctx.HTTPError(http.StatusInternalServerError, "Error merge chunks")
return
}
} else {
chunks, err = listChunksByRunIDV4(r.fs, runID, artifact.ID, blockList)
if err != nil {
log.Error("Error merge chunks: %v", err)
ctx.HTTPError(http.StatusInternalServerError, "Error merge chunks")
return
}
artifact.FileSize = chunks[len(chunks)-1].End + 1
artifact.FileCompressedSize = chunks[len(chunks)-1].End + 1
log.Error("Error list chunks: %v", errors.Join(blockListErr, err))
ctx.HTTPError(http.StatusInternalServerError, "Error list chunks")
return
}
artifact.FileSize = chunks[len(chunks)-1].End + 1
artifact.FileCompressedSize = chunks[len(chunks)-1].End + 1
if req.Size != artifact.FileSize {
log.Error("Error merge chunks size mismatch")
ctx.HTTPError(http.StatusInternalServerError, "Error merge chunks size mismatch")
return
}
checksum := ""
@@ -434,13 +479,13 @@ func (r *artifactV4Routes) finalizeArtifact(ctx *ArtifactContext) {
Ok: true,
ArtifactId: artifact.ID,
}
r.sendProtbufBody(ctx, &respData)
r.sendProtobufBody(ctx, &respData)
}
func (r *artifactV4Routes) listArtifacts(ctx *ArtifactContext) {
var req ListArtifactsRequest
if ok := r.parseProtbufBody(ctx, &req); !ok {
if ok := r.parseProtobufBody(ctx, &req); !ok {
return
}
_, runID, ok := validateRunIDV4(ctx, req.WorkflowRunBackendId)
@@ -485,13 +530,13 @@ func (r *artifactV4Routes) listArtifacts(ctx *ArtifactContext) {
respData := ListArtifactsResponse{
Artifacts: list,
}
r.sendProtbufBody(ctx, &respData)
r.sendProtobufBody(ctx, &respData)
}
func (r *artifactV4Routes) getSignedArtifactURL(ctx *ArtifactContext) {
var req GetSignedArtifactURLRequest
if ok := r.parseProtbufBody(ctx, &req); !ok {
if ok := r.parseProtobufBody(ctx, &req); !ok {
return
}
_, runID, ok := validateRunIDV4(ctx, req.WorkflowRunBackendId)
@@ -525,7 +570,7 @@ func (r *artifactV4Routes) getSignedArtifactURL(ctx *ArtifactContext) {
if respData.SignedUrl == "" {
respData.SignedUrl = r.buildArtifactURL(ctx, "DownloadArtifact", artifactName, ctx.ActionTask.ID, artifact.ID)
}
r.sendProtbufBody(ctx, &respData)
r.sendProtobufBody(ctx, &respData)
}
func (r *artifactV4Routes) downloadArtifact(ctx *ArtifactContext) {
@@ -555,7 +600,7 @@ func (r *artifactV4Routes) downloadArtifact(ctx *ArtifactContext) {
func (r *artifactV4Routes) deleteArtifact(ctx *ArtifactContext) {
var req DeleteArtifactRequest
if ok := r.parseProtbufBody(ctx, &req); !ok {
if ok := r.parseProtobufBody(ctx, &req); !ok {
return
}
_, runID, ok := validateRunIDV4(ctx, req.WorkflowRunBackendId)
@@ -582,5 +627,5 @@ func (r *artifactV4Routes) deleteArtifact(ctx *ArtifactContext) {
Ok: true,
ArtifactId: artifact.ID,
}
r.sendProtbufBody(ctx, &respData)
r.sendProtobufBody(ctx, &respData)
}


@@ -26,9 +26,18 @@ import (
// saveAsPackageBlob creates a package blob from an upload
// The uploaded blob gets stored in a special upload version to link them to the package/image
func saveAsPackageBlob(ctx context.Context, hsr packages_module.HashedSizeReader, pci *packages_service.PackageCreationInfo) (*packages_model.PackageBlob, error) { //nolint:unparam // PackageBlob is never used
// Concurrent uploads of the same blob may happen, so a global lock per blob hash is needed
func saveAsPackageBlob(ctx context.Context, hsr packages_module.HashedSizeReader, pci *packages_service.PackageCreationInfo) (*packages_model.PackageBlob, error) { //nolint:unparam //returned PackageBlob is never used
pb := packages_service.NewPackageBlob(hsr)
err := globallock.LockAndDo(ctx, "container-blob:"+pb.HashSHA256, func(ctx context.Context) error {
var err error
pb, err = saveAsPackageBlobInternal(ctx, hsr, pci, pb)
return err
})
return pb, err
}
func saveAsPackageBlobInternal(ctx context.Context, hsr packages_module.HashedSizeReader, pci *packages_service.PackageCreationInfo, pb *packages_model.PackageBlob) (*packages_model.PackageBlob, error) {
exists := false
contentStore := packages_module.NewContentStore()
@@ -67,7 +76,7 @@ func saveAsPackageBlob(ctx context.Context, hsr packages_module.HashedSizeReader
return createFileForBlob(ctx, uploadVersion, pb)
})
if err != nil {
if !exists {
if !exists && pb != nil { // pb can be nil if GetOrInsertBlob failed
if err := contentStore.Delete(packages_module.BlobHash256Key(pb.HashSHA256)); err != nil {
log.Error("Error deleting package blob from content store: %v", err)
}


@@ -135,7 +135,7 @@ func GetUserOrgsPermissions(ctx *context.APIContext) {
op := api.OrganizationPermissions{}
if !organization.HasOrgOrUserVisible(ctx, o, ctx.ContextUser) {
if !organization.HasOrgOrUserVisible(ctx, o, ctx.Doer) {
ctx.APIErrorNotFound("HasOrgOrUserVisible", nil)
return
}


@@ -356,7 +356,7 @@ func DeleteTime(ctx *context.APIContext) {
return
}
time, err := issues_model.GetTrackedTimeByID(ctx, ctx.PathParamInt64("id"))
time, err := issues_model.GetTrackedTimeByID(ctx, issue.ID, ctx.PathParamInt64("id"))
if err != nil {
if db.IsErrNotExist(err) {
ctx.APIErrorNotFound(err)


@@ -8,6 +8,7 @@ import (
"fmt"
"net/http"
auth_model "code.gitea.io/gitea/models/auth"
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/models/perm"
repo_model "code.gitea.io/gitea/models/repo"
@@ -21,6 +22,28 @@ import (
release_service "code.gitea.io/gitea/services/release"
)
func hasRepoWriteScope(ctx *context.APIContext) bool {
scope, ok := ctx.Data["ApiTokenScope"].(auth_model.AccessTokenScope)
if ctx.Data["IsApiToken"] != true || !ok {
return true
}
requiredScopes := auth_model.GetRequiredScopes(auth_model.Write, auth_model.AccessTokenScopeCategoryRepository)
allow, err := scope.HasScope(requiredScopes...)
if err != nil {
ctx.APIError(http.StatusForbidden, "checking scope failed: "+err.Error())
return false
}
return allow
}
func canAccessDraftRelease(ctx *context.APIContext) bool {
if !ctx.IsSigned || !ctx.Repo.CanWrite(unit.TypeReleases) {
return false
}
return hasRepoWriteScope(ctx)
}
// GetRelease get a single release of a repository
func GetRelease(ctx *context.APIContext) {
// swagger:operation GET /repos/{owner}/{repo}/releases/{id} repository repoGetRelease
@@ -62,6 +85,15 @@ func GetRelease(ctx *context.APIContext) {
return
}
if release.IsDraft { // only the users with write access can see draft releases
if !canAccessDraftRelease(ctx) {
if !ctx.Written() {
ctx.APIErrorNotFound()
}
return
}
}
if err := release.LoadAttributes(ctx); err != nil {
ctx.APIErrorInternal(err)
return
@@ -151,9 +183,13 @@ func ListReleases(ctx *context.APIContext) {
// "$ref": "#/responses/notFound"
listOptions := utils.GetListOptions(ctx)
includeDrafts := (ctx.Repo.AccessMode >= perm.AccessModeWrite || ctx.Repo.UnitAccessMode(unit.TypeReleases) >= perm.AccessModeWrite) && hasRepoWriteScope(ctx)
if ctx.Written() {
return
}
opts := repo_model.FindReleasesOptions{
ListOptions: listOptions,
IncludeDrafts: ctx.Repo.AccessMode >= perm.AccessModeWrite || ctx.Repo.UnitAccessMode(unit.TypeReleases) >= perm.AccessModeWrite,
IncludeDrafts: includeDrafts,
IncludeTags: false,
IsDraft: ctx.FormOptionalBool("draft"),
IsPreRelease: ctx.FormOptionalBool("pre-release"),


@@ -34,6 +34,14 @@ func checkReleaseMatchRepo(ctx *context.APIContext, releaseID int64) bool {
ctx.APIErrorNotFound()
return false
}
if release.IsDraft {
if !canAccessDraftRelease(ctx) {
if !ctx.Written() {
ctx.APIErrorNotFound()
}
return false
}
}
return true
}
@@ -141,6 +149,14 @@ func ListReleaseAttachments(ctx *context.APIContext) {
ctx.APIErrorNotFound()
return
}
if release.IsDraft {
if !canAccessDraftRelease(ctx) {
if !ctx.Written() {
ctx.APIErrorNotFound()
}
return
}
}
if err := release.LoadAttributes(ctx); err != nil {
ctx.APIErrorInternal(err)
return


@@ -149,7 +149,11 @@ func preReceiveBranch(ctx *preReceiveContext, oldCommitID, newCommitID string, r
gitRepo := ctx.Repo.GitRepo
objectFormat := ctx.Repo.GetObjectFormat()
if branchName == repo.DefaultBranch && newCommitID == objectFormat.EmptyObjectID().String() {
defaultBranch := repo.DefaultBranch
if ctx.opts.IsWiki && repo.DefaultWikiBranch != "" {
defaultBranch = repo.DefaultWikiBranch
}
if branchName == defaultBranch && newCommitID == objectFormat.EmptyObjectID().String() {
log.Warn("Forbidden: Branch: %s is the default branch in %-v and cannot be deleted", branchName, repo)
ctx.JSON(http.StatusForbidden, private.Response{
UserMsg: fmt.Sprintf("branch %s is the default branch and cannot be deleted", branchName),


@@ -4,6 +4,7 @@
package auth
import (
"errors"
"fmt"
"html"
"html/template"
@@ -230,8 +231,7 @@ func AuthorizeOAuth(ctx *context.Context) {
// pkce support
switch form.CodeChallengeMethod {
case "S256":
case "plain":
case "S256", "plain":
if err := ctx.Session.Set("CodeChallengeMethod", form.CodeChallengeMethod); err != nil {
handleAuthorizeError(ctx, AuthorizeError{
ErrorCode: ErrorCodeServerError,
@@ -614,6 +614,14 @@ func handleAuthorizationCode(ctx *context.Context, form forms.AccessTokenForm, s
})
return
}
if authorizationCode.IsExpired() {
_ = authorizationCode.Invalidate(ctx)
handleAccessTokenError(ctx, oauth2_provider.AccessTokenError{
ErrorCode: oauth2_provider.AccessTokenErrorCodeInvalidGrant,
ErrorDescription: "authorization code expired",
})
return
}
// check if code verifier authorizes the client, PKCE support
if !authorizationCode.ValidateCodeChallenge(form.CodeVerifier) {
handleAccessTokenError(ctx, oauth2_provider.AccessTokenError{
@@ -632,9 +640,15 @@ func handleAuthorizationCode(ctx *context.Context, form forms.AccessTokenForm, s
}
// remove token from database to deny duplicate usage
if err := authorizationCode.Invalidate(ctx); err != nil {
errDescription := "cannot process your request"
errCode := oauth2_provider.AccessTokenErrorCodeInvalidRequest
if errors.Is(err, auth.ErrOAuth2AuthorizationCodeInvalidated) {
errDescription = "authorization code already used"
errCode = oauth2_provider.AccessTokenErrorCodeInvalidGrant
}
handleAccessTokenError(ctx, oauth2_provider.AccessTokenError{
ErrorCode: oauth2_provider.AccessTokenErrorCodeInvalidRequest,
ErrorDescription: "cannot proceed your request",
ErrorCode: errCode,
ErrorDescription: errDescription,
})
return
}


@@ -60,7 +60,7 @@ func DeleteTime(c *context.Context) {
return
}
t, err := issues_model.GetTrackedTimeByID(c, c.PathParamInt64("timeid"))
t, err := issues_model.GetTrackedTimeByID(c, issue.ID, c.PathParamInt64("timeid"))
if err != nil {
if db.IsErrNotExist(err) {
c.NotFound(err)


@@ -113,7 +113,12 @@ func EmailPost(ctx *context.Context) {
// Make email address primary.
if ctx.FormString("_method") == "PRIMARY" {
if err := user_model.MakeActiveEmailPrimary(ctx, ctx.FormInt64("id")); err != nil {
if err := user_model.MakeActiveEmailPrimary(ctx, ctx.Doer.ID, ctx.FormInt64("id")); err != nil {
if user_model.IsErrEmailAddressNotExist(err) {
ctx.Flash.Error(ctx.Tr("settings.email_primary_not_found"))
ctx.Redirect(setting.AppSubURL + "/user/settings/account")
return
}
ctx.ServerError("MakeEmailPrimary", err)
return
}


@@ -241,9 +241,9 @@ func handlePullRequestAutoMerge(pullID int64, sha string) {
return
}
perm, err := access_model.GetUserRepoPermission(ctx, pr.HeadRepo, doer)
perm, err := access_model.GetUserRepoPermission(ctx, pr.BaseRepo, doer)
if err != nil {
log.Error("GetUserRepoPermission %-v: %v", pr.HeadRepo, err)
log.Error("GetUserRepoPermission %-v: %v", pr.BaseRepo, err)
return
}


@@ -13,6 +13,7 @@ import (
access_model "code.gitea.io/gitea/models/perm/access"
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/cache"
"code.gitea.io/gitea/modules/label"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
@@ -226,7 +227,21 @@ func ToStopWatches(ctx context.Context, doer *user_model.User, sws []*issues_mod
// ToTrackedTimeList converts TrackedTimeList to API format
func ToTrackedTimeList(ctx context.Context, doer *user_model.User, tl issues_model.TrackedTimeList) api.TrackedTimeList {
result := make([]*api.TrackedTime, 0, len(tl))
permCache := cache.NewEphemeralCache()
for _, t := range tl {
// If the issue is not loaded, conservatively skip this entry to avoid bypassing permission checks.
if t.Issue == nil || t.Issue.Repo == nil {
continue
}
perm, err := cache.GetWithEphemeralCache(ctx, permCache, "repo-perm", t.Issue.RepoID, func(ctx context.Context, repoID int64) (access_model.Permission, error) {
return access_model.GetUserRepoPermission(ctx, t.Issue.Repo, doer)
})
if err != nil {
continue
}
if !perm.CanReadIssuesOrPulls(t.Issue.IsPull) {
continue
}
result = append(result, ToTrackedTime(ctx, doer, t))
}
return result

View File

@@ -18,6 +18,7 @@ import (
"code.gitea.io/gitea/modules/timeutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLabel_ToLabel(t *testing.T) {
@@ -83,3 +84,43 @@ func TestToStopWatchesRespectsPermissions(t *testing.T) {
assert.Len(t, visibleAdmin, 2)
assert.ElementsMatch(t, []string{"repo1", "repo3"}, []string{visibleAdmin[0].RepoName, visibleAdmin[1].RepoName})
}
func TestToTrackedTime(t *testing.T) {
require.NoError(t, unittest.PrepareTestDatabase())
ctx := t.Context()
publicIssue := unittest.AssertExistsAndLoadBean(t, &issues_model.Issue{RepoID: 1})
privateIssue := unittest.AssertExistsAndLoadBean(t, &issues_model.Issue{RepoID: 3})
regularUser := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 5})
adminUser := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
publicTrackedTime := &issues_model.TrackedTime{IssueID: publicIssue.ID, UserID: regularUser.ID, Time: 3600}
privateTrackedTime := &issues_model.TrackedTime{IssueID: privateIssue.ID, UserID: regularUser.ID, Time: 1800}
require.NoError(t, db.Insert(ctx, publicTrackedTime))
require.NoError(t, db.Insert(ctx, privateTrackedTime))
t.Run("NilIssues", func(t *testing.T) {
list := ToTrackedTimeList(ctx, regularUser, issues_model.TrackedTimeList{publicTrackedTime, privateTrackedTime})
assert.Empty(t, list)
})
t.Run("NilRepo", func(t *testing.T) {
badTrackedTime := &issues_model.TrackedTime{Issue: &issues_model.Issue{RepoID: 999999}}
visible := ToTrackedTimeList(ctx, regularUser, issues_model.TrackedTimeList{badTrackedTime})
assert.Empty(t, visible)
})
trackedTimes := issues_model.TrackedTimeList{publicTrackedTime, privateTrackedTime}
require.NoError(t, trackedTimes.LoadAttributes(ctx))
t.Run("ToRegularUser", func(t *testing.T) {
list := ToTrackedTimeList(ctx, regularUser, trackedTimes)
require.Len(t, list, 1)
assert.Equal(t, "repo1", list[0].Issue.Repo.Name)
})
t.Run("ToAdminUser", func(t *testing.T) {
list := ToTrackedTimeList(ctx, adminUser, trackedTimes)
require.Len(t, list, 2)
assert.ElementsMatch(t, []string{"repo1", "repo3"}, []string{list[0].Issue.Repo.Name, list[1].Issue.Repo.Name})
})
}

View File

@@ -27,9 +27,9 @@ type CreateRepoForm struct {
DefaultBranch string `binding:"GitRefName;MaxSize(100)"`
AutoInit bool
Gitignores string
IssueLabels string
License string
Readme string
IssueLabels string `binding:"MaxSize(255)"`
License string `binding:"MaxSize(100)"`
Readme string `binding:"MaxSize(255)"`
Template bool
RepoTemplate int64
@@ -41,7 +41,7 @@ type CreateRepoForm struct {
Labels bool
ProtectedBranch bool
ForkSingleBranch string
ForkSingleBranch string `binding:"MaxSize(255)"`
ObjectFormatName string
}

View File

@@ -288,12 +288,13 @@ func (g *RepositoryDumper) CreateLabels(_ context.Context, labels ...*base.Label
func (g *RepositoryDumper) CreateReleases(_ context.Context, releases ...*base.Release) error {
if g.opts.ReleaseAssets {
for _, release := range releases {
attachDir := filepath.Join("release_assets", release.TagName)
attachDir := filepath.Join("release_assets", uuid.New().String())
if err := os.MkdirAll(filepath.Join(g.baseDir, attachDir), os.ModePerm); err != nil {
return err
}
for _, asset := range release.Assets {
attachLocalPath := filepath.Join(attachDir, asset.Name)
// we cannot use asset.Name because it might contain special characters.
attachLocalPath := filepath.Join(attachDir, uuid.New().String())
// SECURITY: We cannot check the DownloadURL and DownloadFunc are safe here
// ... we must assume that they are safe and simply download the attachment

View File

@@ -25,6 +25,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/services/migrations"
notify_service "code.gitea.io/gitea/services/notify"
repo_service "code.gitea.io/gitea/services/repository"
)
@@ -339,7 +340,7 @@ func runSync(ctx context.Context, m *repo_model.Mirror) ([]*mirrorSyncResult, bo
if m.LFS && setting.LFS.StartServer {
log.Trace("SyncMirrors [repo: %-v]: syncing LFS objects...", m.Repo)
endpoint := lfs.DetermineEndpoint(remoteURL.String(), m.LFSEndpoint)
lfsClient := lfs.NewClient(endpoint, nil)
lfsClient := lfs.NewClient(endpoint, migrations.NewMigrationHTTPTransport())
if err = repo_module.StoreMissingLfsObjectsInRepository(ctx, m.Repo, gitRepo, lfsClient); err != nil {
log.Error("SyncMirrors [repo: %-v]: failed to synchronize LFS objects for repository: %v", m.Repo.FullName(), err)
}

View File

@@ -23,6 +23,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/services/migrations"
repo_service "code.gitea.io/gitea/services/repository"
)
@@ -146,7 +147,7 @@ func runPushSync(ctx context.Context, m *repo_model.PushMirror) error {
defer gitRepo.Close()
endpoint := lfs.DetermineEndpoint(remoteURL.String(), "")
lfsClient := lfs.NewClient(endpoint, nil)
lfsClient := lfs.NewClient(endpoint, migrations.NewMigrationHTTPTransport())
if err := pushAllLFSObjects(ctx, gitRepo, lfsClient); err != nil {
return util.SanitizeErrorCredentialURLs(err)
}

View File

@@ -63,10 +63,10 @@ func NewBlobUploader(ctx context.Context, id string) (*BlobUploader, error) {
}
return &BlobUploader{
model,
hash,
f,
false,
PackageBlobUpload: model,
MultiHasher: hash,
file: f,
reading: false,
}, nil
}

View File

@@ -6,11 +6,13 @@ package pull
import (
"context"
"code.gitea.io/gitea/models/db"
issues_model "code.gitea.io/gitea/models/issues"
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/gitrepo"
"code.gitea.io/gitea/modules/json"
"code.gitea.io/gitea/modules/log"
)
// getCommitIDsFromRepo gets the commit IDs in the repo between oldCommitID and newCommitID
@@ -53,34 +55,67 @@ func CreatePushPullComment(ctx context.Context, pusher *user_model.User, pr *iss
}
opts := &issues_model.CreateCommentOptions{
Type: issues_model.CommentTypePullRequestPush,
Doer: pusher,
Repo: pr.BaseRepo,
IsForcePush: isForcePush,
Issue: pr.Issue,
Type: issues_model.CommentTypePullRequestPush,
Doer: pusher,
Repo: pr.BaseRepo,
Issue: pr.Issue,
}
var data issues_model.PushActionContent
if opts.IsForcePush {
data.CommitIDs = []string{oldCommitID, newCommitID}
data.IsForcePush = true
} else {
data.CommitIDs, err = getCommitIDsFromRepo(ctx, pr.BaseRepo, oldCommitID, newCommitID, pr.BaseBranch)
if err != nil {
data.CommitIDs, err = getCommitIDsFromRepo(ctx, pr.BaseRepo, oldCommitID, newCommitID, pr.BaseBranch)
if err != nil {
// For force-push events, a missing/unreachable old commit should not prevent
// deleting stale push comments or creating the force-push timeline entry.
if !isForcePush {
return nil, err
}
if len(data.CommitIDs) == 0 {
return nil, nil
log.Error("getCommitIDsFromRepo: %v", err)
}
// It may be an empty pull request; only a non-empty pull request needs a push comment.
// For a force push, we always need to delete the old push comment, so don't return here.
if len(data.CommitIDs) == 0 && !isForcePush {
return nil, nil //nolint:nilnil // return nil because no comment needs to be created
}
return db.WithTx2(ctx, func(ctx context.Context) (*issues_model.Comment, error) {
if isForcePush {
// A push-commits comment should not have history, cross references, reactions or other
// plain-comment related records, so we just need to delete the comment itself.
if _, err := db.GetEngine(ctx).Where("issue_id = ?", pr.IssueID).
And("type = ?", issues_model.CommentTypePullRequestPush).
NoAutoCondition().
Delete(new(issues_model.Comment)); err != nil {
return nil, err
}
}
}
dataJSON, err := json.Marshal(data)
if err != nil {
return nil, err
}
if len(data.CommitIDs) > 0 {
dataJSON, err := json.Marshal(data)
if err != nil {
return nil, err
}
opts.Content = string(dataJSON)
comment, err = issues_model.CreateComment(ctx, opts)
if err != nil {
return nil, err
}
}
opts.Content = string(dataJSON)
comment, err = issues_model.CreateComment(ctx, opts)
if isForcePush { // if it's a force push, we need to add a force push comment
data.CommitIDs = []string{oldCommitID, newCommitID}
data.IsForcePush = true
dataJSON, err := json.Marshal(data)
if err != nil {
return nil, err
}
opts.Content = string(dataJSON)
opts.IsForcePush = true // FIXME: this field seems unnecessary now because PushActionContent includes an IsForcePush field
comment, err = issues_model.CreateComment(ctx, opts)
if err != nil {
return nil, err
}
}
return comment, err
return comment, err
})
}

View File

@@ -0,0 +1,173 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package pull
import (
"testing"
issues_model "code.gitea.io/gitea/models/issues"
"code.gitea.io/gitea/models/unittest"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/gitrepo"
"code.gitea.io/gitea/modules/json"
"github.com/stretchr/testify/assert"
)
func TestCreatePushPullCommentForcePushDeletesOldComments(t *testing.T) {
t.Run("base-branch-only", func(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
pr := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2})
assert.NoError(t, pr.LoadIssue(t.Context()))
assert.NoError(t, pr.LoadBaseRepo(t.Context()))
pusher := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
_, err := issues_model.CreateComment(t.Context(), &issues_model.CreateCommentOptions{
Type: issues_model.CommentTypePullRequestPush,
Doer: pusher,
Repo: pr.BaseRepo,
Issue: pr.Issue,
Content: "{}",
})
assert.NoError(t, err)
_, err = issues_model.CreateComment(t.Context(), &issues_model.CreateCommentOptions{
Type: issues_model.CommentTypePullRequestPush,
Doer: pusher,
Repo: pr.BaseRepo,
Issue: pr.Issue,
Content: "{}",
})
assert.NoError(t, err)
comments, err := issues_model.FindComments(t.Context(), &issues_model.FindCommentsOptions{
IssueID: pr.IssueID,
Type: issues_model.CommentTypePullRequestPush,
})
assert.NoError(t, err)
assert.Len(t, comments, 2)
gitRepo, err := gitrepo.OpenRepository(t.Context(), pr.BaseRepo)
assert.NoError(t, err)
defer gitRepo.Close()
headCommit, err := gitRepo.GetBranchCommit(pr.BaseBranch)
assert.NoError(t, err)
oldCommit := headCommit
if headCommit.ParentCount() > 0 {
parentCommit, err := headCommit.Parent(0)
assert.NoError(t, err)
oldCommit = parentCommit
}
comment, err := CreatePushPullComment(t.Context(), pusher, pr, oldCommit.ID.String(), headCommit.ID.String(), true)
assert.NoError(t, err)
assert.NotNil(t, comment)
var createdData issues_model.PushActionContent
assert.NoError(t, json.Unmarshal([]byte(comment.Content), &createdData))
assert.True(t, createdData.IsForcePush)
// When both commits are on the base branch, CommitsBetweenNotBase should
// typically return no commits, so only the force-push comment is expected.
commits, err := gitRepo.CommitsBetweenNotBase(headCommit, oldCommit, pr.BaseBranch)
assert.NoError(t, err)
assert.Empty(t, commits)
comments, err = issues_model.FindComments(t.Context(), &issues_model.FindCommentsOptions{
IssueID: pr.IssueID,
Type: issues_model.CommentTypePullRequestPush,
})
assert.NoError(t, err)
assert.Len(t, comments, 1)
forcePushCount := 0
for _, comment := range comments {
var pushData issues_model.PushActionContent
assert.NoError(t, json.Unmarshal([]byte(comment.Content), &pushData))
if pushData.IsForcePush {
forcePushCount++
}
}
assert.Equal(t, 1, forcePushCount)
})
t.Run("head-vs-base-branch", func(t *testing.T) {
assert.NoError(t, unittest.PrepareTestDatabase())
pr := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2})
assert.NoError(t, pr.LoadIssue(t.Context()))
assert.NoError(t, pr.LoadBaseRepo(t.Context()))
pusher := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
_, err := issues_model.CreateComment(t.Context(), &issues_model.CreateCommentOptions{
Type: issues_model.CommentTypePullRequestPush,
Doer: pusher,
Repo: pr.BaseRepo,
Issue: pr.Issue,
Content: "{}",
})
assert.NoError(t, err)
_, err = issues_model.CreateComment(t.Context(), &issues_model.CreateCommentOptions{
Type: issues_model.CommentTypePullRequestPush,
Doer: pusher,
Repo: pr.BaseRepo,
Issue: pr.Issue,
Content: "{}",
})
assert.NoError(t, err)
comments, err := issues_model.FindComments(t.Context(), &issues_model.FindCommentsOptions{
IssueID: pr.IssueID,
Type: issues_model.CommentTypePullRequestPush,
})
assert.NoError(t, err)
assert.Len(t, comments, 2)
gitRepo, err := gitrepo.OpenRepository(t.Context(), pr.BaseRepo)
assert.NoError(t, err)
defer gitRepo.Close()
// In this subtest, use the head branch for the new commit and the base branch
// for the old commit so that CommitsBetweenNotBase returns non-empty results.
headCommit, err := gitRepo.GetBranchCommit(pr.HeadBranch)
assert.NoError(t, err)
baseCommit, err := gitRepo.GetBranchCommit(pr.BaseBranch)
assert.NoError(t, err)
oldCommit := baseCommit
comment, err := CreatePushPullComment(t.Context(), pusher, pr, oldCommit.ID.String(), headCommit.ID.String(), true)
assert.NoError(t, err)
assert.NotNil(t, comment)
var createdData issues_model.PushActionContent
assert.NoError(t, json.Unmarshal([]byte(comment.Content), &createdData))
assert.True(t, createdData.IsForcePush)
commits, err := gitRepo.CommitsBetweenNotBase(headCommit, oldCommit, pr.BaseBranch)
assert.NoError(t, err)
// For this scenario we expect at least one commit between head and base
// that is not on the base branch, so data.CommitIDs should be non-empty.
assert.NotEmpty(t, commits)
comments, err = issues_model.FindComments(t.Context(), &issues_model.FindCommentsOptions{
IssueID: pr.IssueID,
Type: issues_model.CommentTypePullRequestPush,
})
assert.NoError(t, err)
// Two comments should exist now: one regular push comment and one force-push comment.
assert.Len(t, comments, 2)
forcePushCount := 0
for _, comment := range comments {
var pushData issues_model.PushActionContent
assert.NoError(t, json.Unmarshal([]byte(comment.Content), &pushData))
if pushData.IsForcePush {
forcePushCount++
}
}
assert.Equal(t, 1, forcePushCount)
})
}

View File

@@ -96,78 +96,105 @@ func Update(ctx context.Context, pr *issues_model.PullRequest, doer *user_model.
return err
}
// IsUserAllowedToUpdate checks whether the user is allowed to update the PR with the given permissions and branch protections
// updating a PR means pushing new commits from the base branch to the PR head branch
func IsUserAllowedToUpdate(ctx context.Context, pull *issues_model.PullRequest, user *user_model.User) (mergeAllowed, rebaseAllowed bool, err error) {
if pull.Flow == issues_model.PullRequestFlowAGit {
return false, false, nil
}
// isUserAllowedToPushOrForcePushInRepoBranch checks whether the user is allowed to push or force push to the given repo and branch
// it will check both user permission and branch protection rules
func isUserAllowedToPushOrForcePushInRepoBranch(ctx context.Context, user *user_model.User, repo *repo_model.Repository, branch string) (pushAllowed, forcePushAllowed bool, err error) {
if user == nil {
return false, false, nil
}
headRepoPerm, err := access_model.GetUserRepoPermission(ctx, pull.HeadRepo, user)
// 1. check user push permission on the given repository
repoPerm, err := access_model.GetUserRepoPermission(ctx, repo, user)
if err != nil {
if repo_model.IsErrUnitTypeNotExist(err) {
return false, false, nil
}
return false, false, err
}
pushAllowed = repoPerm.CanWrite(unit.TypeCode)
forcePushAllowed = pushAllowed
// 2. check branch protection whether user can push or force push
pb, err := git_model.GetFirstMatchProtectedBranchRule(ctx, repo.ID, branch)
if err != nil {
return false, false, err
}
if pb != nil { // override previous results if there is a branch protection rule
pb.Repo = repo
pushAllowed = pb.CanUserPush(ctx, user)
forcePushAllowed = pb.CanUserForcePush(ctx, user)
}
return pushAllowed, forcePushAllowed, nil
}
// IsUserAllowedToUpdate checks whether the user is allowed to update the PR with the given permissions and branch protections
// updating a PR means pushing new commits from the base branch to the PR head branch
func IsUserAllowedToUpdate(ctx context.Context, pull *issues_model.PullRequest, user *user_model.User) (pushAllowed, rebaseAllowed bool, err error) {
if user == nil {
return false, false, nil
}
if err := pull.LoadBaseRepo(ctx); err != nil {
return false, false, err
}
if err := pull.LoadHeadRepo(ctx); err != nil {
return false, false, err
}
// 1. check base repository's AllowRebaseUpdate configuration
// 1. check whether the pull request unit is enabled.
prBaseUnit, err := pull.BaseRepo.GetUnit(ctx, unit.TypePullRequests)
if repo_model.IsErrUnitTypeNotExist(err) {
return false, false, nil // the PR unit is disabled in base repo means no update allowed
} else if err != nil {
return false, false, fmt.Errorf("get base repo unit: %v", err)
}
// 2. only support GitHub-style pull requests
if pull.Flow == issues_model.PullRequestFlowAGit {
return false, false, nil
}
// 3. check user push permission on head repository
pushAllowed, rebaseAllowed, err = isUserAllowedToPushOrForcePushInRepoBranch(ctx, user, pull.HeadRepo, pull.HeadBranch)
if err != nil {
return false, false, err
}
// 4. if the pull creator allows maintainer to edit, we need to check whether
// user is a maintainer (has permission to merge into base branch) and inherit pull request poster's permission
if pull.AllowMaintainerEdit && (!pushAllowed || !rebaseAllowed) {
baseRepoPerm, err := access_model.GetUserRepoPermission(ctx, pull.BaseRepo, user)
if err != nil {
return false, false, err
}
userAllowedToMergePR, err := isUserAllowedToMergeInRepoBranch(ctx, pull.BaseRepoID, pull.BaseBranch, baseRepoPerm, user)
if err != nil {
return false, false, err
}
if userAllowedToMergePR {
// the user is a maintainer (can merge the PR) and this PR allows maintainer edits,
// so the user should inherit the PR poster's push/rebase permission for the head branch
if err := pull.LoadIssue(ctx); err != nil {
return false, false, err
}
if err := pull.Issue.LoadPoster(ctx); err != nil {
return false, false, err
}
posterPushAllowed, posterRebaseAllowed, err := isUserAllowedToPushOrForcePushInRepoBranch(ctx, pull.Issue.Poster, pull.HeadRepo, pull.HeadBranch)
if err != nil {
return false, false, err
}
if !pushAllowed {
pushAllowed = posterPushAllowed
}
if !rebaseAllowed {
rebaseAllowed = posterRebaseAllowed
}
}
}
// 5. check base repository's AllowRebaseUpdate configuration
// it is a config in base repo but controls the head (fork) repo's "Update" behavior
{
prBaseUnit, err := pull.BaseRepo.GetUnit(ctx, unit.TypePullRequests)
if repo_model.IsErrUnitTypeNotExist(err) {
return false, false, nil // the PR unit is disabled in base repo
} else if err != nil {
return false, false, fmt.Errorf("get base repo unit: %v", err)
}
rebaseAllowed = prBaseUnit.PullRequestsConfig().AllowRebaseUpdate
}
// 2. check head branch protection whether rebase is allowed, if pb not found then rebase depends on the above setting
{
pb, err := git_model.GetFirstMatchProtectedBranchRule(ctx, pull.HeadRepoID, pull.HeadBranch)
if err != nil {
return false, false, err
}
// If branch protected, disable rebase unless user is whitelisted to force push (which extends regular push)
if pb != nil {
pb.Repo = pull.HeadRepo
rebaseAllowed = rebaseAllowed && pb.CanUserForcePush(ctx, user)
}
}
// 3. check whether user has write access to head branch
baseRepoPerm, err := access_model.GetUserRepoPermission(ctx, pull.BaseRepo, user)
if err != nil {
return false, false, err
}
mergeAllowed, err = isUserAllowedToMergeInRepoBranch(ctx, pull.HeadRepoID, pull.HeadBranch, headRepoPerm, user)
if err != nil {
return false, false, err
}
// 4. if the pull creator allows maintainer to edit, it means the write permissions of the head branch has been
// granted to the user with write permission of the base repository
if pull.AllowMaintainerEdit {
mergeAllowedMaintainer, err := isUserAllowedToMergeInRepoBranch(ctx, pull.BaseRepoID, pull.BaseBranch, baseRepoPerm, user)
if err != nil {
return false, false, err
}
mergeAllowed = mergeAllowed || mergeAllowedMaintainer
}
// if merge is not allowed, rebase is also not allowed
rebaseAllowed = rebaseAllowed && mergeAllowed
return mergeAllowed, rebaseAllowed, nil
return pushAllowed, rebaseAllowed && prBaseUnit.PullRequestsConfig().AllowRebaseUpdate, nil
}
// GetDiverging determines how many commits a PR is ahead or behind the PR base branch

View File

@@ -0,0 +1,172 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package pull
import (
"testing"
"code.gitea.io/gitea/models/db"
git_model "code.gitea.io/gitea/models/git"
issues_model "code.gitea.io/gitea/models/issues"
"code.gitea.io/gitea/models/perm"
access_model "code.gitea.io/gitea/models/perm/access"
repo_model "code.gitea.io/gitea/models/repo"
"code.gitea.io/gitea/models/unit"
"code.gitea.io/gitea/models/unittest"
user_model "code.gitea.io/gitea/models/user"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestIsUserAllowedToUpdate(t *testing.T) {
require.NoError(t, unittest.PrepareTestDatabase())
setRepoAllowRebaseUpdate := func(t *testing.T, repoID int64, allow bool) {
repoUnit := unittest.AssertExistsAndLoadBean(t, &repo_model.RepoUnit{RepoID: repoID, Type: unit.TypePullRequests})
repoUnit.PullRequestsConfig().AllowRebaseUpdate = allow
require.NoError(t, repo_model.UpdateRepoUnit(t.Context(), repoUnit))
}
user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2})
t.Run("RespectsProtectedBranch", func(t *testing.T) {
pr2 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2})
protectedBranch := &git_model.ProtectedBranch{
RepoID: pr2.HeadRepoID,
RuleName: pr2.HeadBranch,
CanPush: false,
CanForcePush: false,
}
_, err := db.GetEngine(t.Context()).Insert(protectedBranch)
require.NoError(t, err)
defer db.DeleteByBean(t.Context(), protectedBranch)
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr2, user2)
assert.NoError(t, err)
assert.False(t, pushAllowed)
assert.False(t, rebaseAllowed)
})
t.Run("DisallowRebaseWhenConfigDisabled", func(t *testing.T) {
pr2 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2})
setRepoAllowRebaseUpdate(t, pr2.BaseRepoID, false)
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr2, user2)
assert.NoError(t, err)
assert.True(t, pushAllowed)
assert.False(t, rebaseAllowed)
})
t.Run("ReadOnlyAccessDenied", func(t *testing.T) {
pr2 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2})
user4 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 4})
collaboration := &repo_model.Collaboration{
RepoID: pr2.HeadRepoID,
UserID: user4.ID,
Mode: perm.AccessModeRead,
}
require.NoError(t, db.Insert(t.Context(), collaboration))
defer db.DeleteByBean(t.Context(), collaboration)
require.NoError(t, pr2.LoadHeadRepo(t.Context()))
assert.NoError(t, access_model.RecalculateUserAccess(t.Context(), pr2.HeadRepo, user4.ID))
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr2, user4)
assert.NoError(t, err)
assert.False(t, pushAllowed)
assert.False(t, rebaseAllowed)
})
t.Run("ProtectedBranchAllowsPushWithoutRebase", func(t *testing.T) {
pr2 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2})
protectedBranch := &git_model.ProtectedBranch{
RepoID: pr2.HeadRepoID,
RuleName: pr2.HeadBranch,
CanPush: true,
CanForcePush: false,
}
_, err := db.GetEngine(t.Context()).Insert(protectedBranch)
require.NoError(t, err)
defer db.DeleteByBean(t.Context(), protectedBranch)
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr2, user2)
assert.NoError(t, err)
assert.True(t, pushAllowed)
assert.False(t, rebaseAllowed)
})
pr3Poster := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 12})
t.Run("MaintainerEditRespectsPosterPermissions", func(t *testing.T) {
pr3 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 3})
pr3.AllowMaintainerEdit = true
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr3, pr3Poster)
assert.NoError(t, err)
assert.False(t, pushAllowed)
assert.False(t, rebaseAllowed)
})
t.Run("MaintainerEditInheritsPosterPermissions", func(t *testing.T) {
pr3 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 3})
pr3.AllowMaintainerEdit = true
protectedBranch := &git_model.ProtectedBranch{
RepoID: pr3.HeadRepoID,
RuleName: pr3.HeadBranch,
CanPush: true,
CanForcePush: true,
}
_, err := db.GetEngine(t.Context()).Insert(protectedBranch)
require.NoError(t, err)
defer db.DeleteByBean(t.Context(), protectedBranch)
collaboration := &repo_model.Collaboration{
RepoID: pr3.HeadRepoID,
UserID: pr3Poster.ID,
Mode: perm.AccessModeWrite,
}
require.NoError(t, db.Insert(t.Context(), collaboration))
defer db.DeleteByBean(t.Context(), collaboration)
require.NoError(t, pr3.LoadHeadRepo(t.Context()))
assert.NoError(t, access_model.RecalculateUserAccess(t.Context(), pr3.HeadRepo, pr3Poster.ID))
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr3, pr3Poster)
assert.NoError(t, err)
assert.True(t, pushAllowed)
assert.True(t, rebaseAllowed)
})
t.Run("MaintainerEditInheritsPosterPermissionsRebaseDisabled", func(t *testing.T) {
pr3 := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 3})
pr3.AllowMaintainerEdit = true
protectedBranch := &git_model.ProtectedBranch{
RepoID: pr3.HeadRepoID,
RuleName: pr3.HeadBranch,
CanPush: true,
CanForcePush: true,
}
_, err := db.GetEngine(t.Context()).Insert(protectedBranch)
require.NoError(t, err)
defer db.DeleteByBean(t.Context(), protectedBranch)
collaboration := &repo_model.Collaboration{
RepoID: pr3.HeadRepoID,
UserID: pr3Poster.ID,
Mode: perm.AccessModeWrite,
}
require.NoError(t, db.Insert(t.Context(), collaboration))
defer db.DeleteByBean(t.Context(), collaboration)
require.NoError(t, pr3.LoadHeadRepo(t.Context()))
assert.NoError(t, access_model.RecalculateUserAccess(t.Context(), pr3.HeadRepo, pr3Poster.ID))
setRepoAllowRebaseUpdate(t, pr3.BaseRepoID, false)
pushAllowed, rebaseAllowed, err := IsUserAllowedToUpdate(t.Context(), pr3, pr3Poster)
assert.NoError(t, err)
assert.True(t, pushAllowed)
assert.False(t, rebaseAllowed)
})
}

View File

@@ -229,6 +229,9 @@ func CreateRepositoryDirectly(ctx context.Context, doer, owner *user_model.User,
if opts.ObjectFormatName == "" {
opts.ObjectFormatName = git.Sha1ObjectFormat.Name()
}
if opts.ObjectFormatName != git.Sha1ObjectFormat.Name() && opts.ObjectFormatName != git.Sha256ObjectFormat.Name() {
return nil, fmt.Errorf("unsupported object format: %s", opts.ObjectFormatName)
}
repo := &repo_model.Repository{
OwnerID: owner.ID,

View File

@@ -104,12 +104,12 @@ func generateExpansion(ctx context.Context, src string, templateRepo, generateRe
// giteaTemplateFileMatcher holds information about a .gitea/template file
type giteaTemplateFileMatcher struct {
LocalFullPath string
globs []glob.Glob
relPath string
globs []glob.Glob
}
func newGiteaTemplateFileMatcher(fullPath string, content []byte) *giteaTemplateFileMatcher {
gt := &giteaTemplateFileMatcher{LocalFullPath: fullPath}
func newGiteaTemplateFileMatcher(relPath string, content []byte) *giteaTemplateFileMatcher {
gt := &giteaTemplateFileMatcher{relPath: relPath}
gt.globs = make([]glob.Glob, 0)
scanner := bufio.NewScanner(bytes.NewReader(content))
for scanner.Scan() {
@@ -140,64 +140,44 @@ func (gt *giteaTemplateFileMatcher) Match(s string) bool {
return false
}
func readLocalTmpRepoFileContent(localPath string, limit int) ([]byte, error) {
ok, err := util.IsRegularFile(localPath)
if err != nil {
return nil, err
} else if !ok {
return nil, fs.ErrNotExist
}
f, err := os.Open(localPath)
if err != nil {
return nil, err
}
defer f.Close()
return util.ReadWithLimit(f, limit)
}
func readGiteaTemplateFile(tmpDir string) (*giteaTemplateFileMatcher, error) {
localPath := filepath.Join(tmpDir, ".gitea", "template")
content, err := readLocalTmpRepoFileContent(localPath, 1024*1024)
templateRelPath := filepath.Join(".gitea", "template")
content, err := util.ReadRegularPathFile(tmpDir, templateRelPath, 1024*1024)
if err != nil {
return nil, err
return nil, util.Iif(errors.Is(err, util.ErrNotRegularPathFile), os.ErrNotExist, err)
}
return newGiteaTemplateFileMatcher(localPath, content), nil
return newGiteaTemplateFileMatcher(templateRelPath, content), nil
}
func substGiteaTemplateFile(ctx context.Context, tmpDir, tmpDirSubPath string, templateRepo, generateRepo *repo_model.Repository) error {
tmpFullPath := filepath.Join(tmpDir, tmpDirSubPath)
content, err := readLocalTmpRepoFileContent(tmpFullPath, 1024*1024)
content, err := util.ReadRegularPathFile(tmpDir, tmpDirSubPath, 1024*1024)
if err != nil {
return util.Iif(errors.Is(err, fs.ErrNotExist), nil, err)
if errors.Is(err, fs.ErrNotExist) {
return nil
}
return err
}
if err := util.Remove(tmpFullPath); err != nil {
if err := os.Remove(util.FilePathJoinAbs(tmpDir, tmpDirSubPath)); err != nil {
return err
}
generatedContent := generateExpansion(ctx, string(content), templateRepo, generateRepo)
substSubPath := filePathSanitize(generateExpansion(ctx, tmpDirSubPath, templateRepo, generateRepo))
newLocalPath := filepath.Join(tmpDir, substSubPath)
regular, err := util.IsRegularFile(newLocalPath)
if canWrite := regular || errors.Is(err, fs.ErrNotExist); !canWrite {
return nil
}
if err := os.MkdirAll(filepath.Dir(newLocalPath), 0o755); err != nil {
return err
}
return os.WriteFile(newLocalPath, []byte(generatedContent), 0o644)
return util.WriteRegularPathFile(tmpDir, substSubPath, []byte(generatedContent), 0o755, 0o644)
}
func processGiteaTemplateFile(ctx context.Context, tmpDir string, templateRepo, generateRepo *repo_model.Repository, fileMatcher *giteaTemplateFileMatcher) error {
if err := util.Remove(fileMatcher.LocalFullPath); err != nil {
return fmt.Errorf("unable to remove .gitea/template: %w", err)
// processGiteaTemplateFile processes and removes the .gitea/template file, does variable expansion for template files,
// and saves the processed files to the filesystem. It returns a list of skipped files that are not regular paths.
func processGiteaTemplateFile(ctx context.Context, tmpDir string, templateRepo, generateRepo *repo_model.Repository, fileMatcher *giteaTemplateFileMatcher) (skippedFiles []string, _ error) {
// Why not use "os.Root" here: symlinks are unsafe even within the same root, so "os.Root" can't help, and it is harder to use "os.Root" for the WalkDir.
if err := os.Remove(util.FilePathJoinAbs(tmpDir, fileMatcher.relPath)); err != nil {
return nil, fmt.Errorf("unable to remove .gitea/template: %w", err)
}
if !fileMatcher.HasRules() {
return nil // Avoid walking tree if there are no globs
return skippedFiles, nil // Avoid walking tree if there are no globs
}
return filepath.WalkDir(tmpDir, func(fullPath string, d os.DirEntry, walkErr error) error {
err := filepath.WalkDir(tmpDir, func(fullPath string, d os.DirEntry, walkErr error) error {
if walkErr != nil {
return walkErr
}
@@ -209,10 +189,22 @@ func processGiteaTemplateFile(ctx context.Context, tmpDir string, templateRepo,
return err
}
if fileMatcher.Match(filepath.ToSlash(tmpDirSubPath)) {
return substGiteaTemplateFile(ctx, tmpDir, tmpDirSubPath, templateRepo, generateRepo)
err := substGiteaTemplateFile(ctx, tmpDir, tmpDirSubPath, templateRepo, generateRepo)
if errors.Is(err, util.ErrNotRegularPathFile) {
skippedFiles = append(skippedFiles, tmpDirSubPath)
} else if err != nil {
return err
}
}
return nil
}) // end: WalkDir
if err != nil {
return nil, err
}
if err = util.RemoveAll(util.FilePathJoinAbs(tmpDir, ".git")); err != nil {
return nil, err
}
return skippedFiles, nil
}
func generateRepoCommit(ctx context.Context, repo, templateRepo, generateRepo *repo_model.Repository, tmpDir string) error {
@@ -251,7 +243,7 @@ func generateRepoCommit(ctx context.Context, repo, templateRepo, generateRepo *r
// Variable expansion
fileMatcher, err := readGiteaTemplateFile(tmpDir)
if err == nil {
err = processGiteaTemplateFile(ctx, tmpDir, templateRepo, generateRepo, fileMatcher)
_, err = processGiteaTemplateFile(ctx, tmpDir, templateRepo, generateRepo, fileMatcher)
if err != nil {
return fmt.Errorf("processGiteaTemplateFile: %w", err)
}


@@ -74,7 +74,7 @@ func TestFilePathSanitize(t *testing.T) {
assert.Equal(t, ".", filePathSanitize("/"))
}
func TestProcessGiteaTemplateFile(t *testing.T) {
func TestProcessGiteaTemplateFileGenerate(t *testing.T) {
tmpDir := filepath.Join(t.TempDir(), "gitea-template-test")
assertFileContent := func(path, expected string) {
@@ -97,6 +97,8 @@ func TestProcessGiteaTemplateFile(t *testing.T) {
assert.Equal(t, expected, link, "symlink target mismatch for %s", path)
}
require.NoError(t, os.MkdirAll(tmpDir+"/.git", 0o755))
require.NoError(t, os.WriteFile(tmpDir+"/.git/config", []byte("git-config-dummy"), 0o644))
require.NoError(t, os.MkdirAll(tmpDir+"/.gitea", 0o755))
require.NoError(t, os.WriteFile(tmpDir+"/.gitea/template", []byte("*\ninclude/**"), 0o644))
require.NoError(t, os.MkdirAll(tmpDir+"/sub", 0o755))
@@ -127,10 +129,20 @@ func TestProcessGiteaTemplateFile(t *testing.T) {
assertFileContent("subst-${TEMPLATE_NAME}-to-link", toLinkContent)
assertFileContent("subst-${TEMPLATE_NAME}-from-link", fromLinkContent)
}
// case-5
{
require.NoError(t, os.MkdirAll(tmpDir+"/real-dir", 0o755))
require.NoError(t, os.WriteFile(tmpDir+"/real-dir/real-file", []byte("origin content"), 0o644))
require.NoError(t, os.MkdirAll(tmpDir+"/include/subst-${TEMPLATE_NAME}-link-dir", 0o755))
require.NoError(t, os.WriteFile(tmpDir+"/include/subst-${TEMPLATE_NAME}-link-dir/real-file", []byte("template content"), 0o644))
require.NoError(t, os.Symlink(tmpDir+"/real-dir", tmpDir+"/include/subst-TemplateRepoName-link-dir"))
}
{
// will succeed
require.NoError(t, os.WriteFile(tmpDir+"/subst-${TEMPLATE_NAME}-normal", []byte("dummy subst template name normal"), 0o644))
// will skil if the path subst result is a link
// will be skipped if the path subst result is a link
require.NoError(t, os.WriteFile(tmpDir+"/subst-${TEMPLATE_NAME}-to-link", []byte("dummy subst template name to link"), 0o644))
require.NoError(t, os.Symlink(tmpDir+"/sub/link-target", tmpDir+"/subst-TemplateRepoName-to-link"))
// will be skipped since the source is a symlink
@@ -143,9 +155,20 @@ func TestProcessGiteaTemplateFile(t *testing.T) {
{
templateRepo := &repo_model.Repository{Name: "TemplateRepoName"}
generatedRepo := &repo_model.Repository{Name: "/../.gIt/name"}
assertFileContent(".git/config", "git-config-dummy")
fileMatcher, _ := readGiteaTemplateFile(tmpDir)
err := processGiteaTemplateFile(t.Context(), tmpDir, templateRepo, generatedRepo, fileMatcher)
skippedFiles, err := processGiteaTemplateFile(t.Context(), tmpDir, templateRepo, generatedRepo, fileMatcher)
require.NoError(t, err)
assert.Equal(t, []string{
"include/subst-${TEMPLATE_NAME}-link-dir/real-file",
"include/subst-TemplateRepoName-link-dir",
"link",
"subst-${TEMPLATE_NAME}-from-link",
"subst-${TEMPLATE_NAME}-to-link",
"subst-TemplateRepoName-to-link",
}, skippedFiles)
assertFileContent(".git/config", "")
assertFileContent(".gitea/template", "")
assertFileContent("include/foo/bar/test.txt", "include subdir TemplateRepoName")
}
@@ -182,32 +205,38 @@ func TestProcessGiteaTemplateFile(t *testing.T) {
assertSymLink("subst-${TEMPLATE_NAME}-from-link", tmpDir+"/sub/link-target")
}
// case-5
{
templateFilePath := tmpDir + "/.gitea/template"
_ = os.Remove(templateFilePath)
_, err := os.Lstat(templateFilePath)
require.ErrorIs(t, err, fs.ErrNotExist)
_, err = readGiteaTemplateFile(tmpDir) // no template file
require.ErrorIs(t, err, fs.ErrNotExist)
_ = os.WriteFile(templateFilePath+".target", []byte("test-data-target"), 0o644)
_ = os.Symlink(templateFilePath+".target", templateFilePath)
content, _ := os.ReadFile(templateFilePath)
require.Equal(t, "test-data-target", string(content))
_, err = readGiteaTemplateFile(tmpDir) // symlinked template file
require.ErrorIs(t, err, fs.ErrNotExist)
_ = os.Remove(templateFilePath)
_ = os.WriteFile(templateFilePath, []byte("test-data-regular"), 0o644)
content, _ = os.ReadFile(templateFilePath)
require.Equal(t, "test-data-regular", string(content))
fm, err := readGiteaTemplateFile(tmpDir) // regular template file
require.NoError(t, err)
assert.Len(t, fm.globs, 1)
assertFileContent("real-dir/real-file", "origin content")
}
}
func TestProcessGiteaTemplateFileRead(t *testing.T) {
tmpDir := t.TempDir()
_ = os.Mkdir(tmpDir+"/.gitea", 0o755)
templateFilePath := tmpDir + "/.gitea/template"
_ = os.Remove(templateFilePath)
_, err := os.Lstat(templateFilePath)
require.ErrorIs(t, err, fs.ErrNotExist)
_, err = readGiteaTemplateFile(tmpDir) // no template file
require.ErrorIs(t, err, fs.ErrNotExist)
_ = os.WriteFile(templateFilePath+".target", []byte("test-data-target"), 0o644)
_ = os.Symlink(templateFilePath+".target", templateFilePath)
content, _ := os.ReadFile(templateFilePath)
require.Equal(t, "test-data-target", string(content))
_, err = readGiteaTemplateFile(tmpDir) // symlinked template file
require.ErrorIs(t, err, fs.ErrNotExist)
_ = os.Remove(templateFilePath)
_ = os.WriteFile(templateFilePath, []byte("test-data-regular"), 0o644)
content, _ = os.ReadFile(templateFilePath)
require.Equal(t, "test-data-regular", string(content))
fm, err := readGiteaTemplateFile(tmpDir) // regular template file
require.NoError(t, err)
assert.Len(t, fm.globs, 1)
}
func TestTransformers(t *testing.T) {
cases := []struct {
name string


@@ -123,10 +123,8 @@ func GarbageCollectLFSMetaObjectsForRepo(ctx context.Context, repo *repo_model.R
//
// It is likely that a week is potentially excessive but it should definitely be enough that any
// unassociated LFS object is genuinely unassociated.
OlderThan: timeutil.TimeStamp(opts.OlderThan.Unix()),
UpdatedLessRecentlyThan: timeutil.TimeStamp(opts.UpdatedLessRecentlyThan.Unix()),
OrderByUpdated: true,
LoopFunctionAlwaysUpdates: true,
OlderThan: timeutil.TimeStamp(opts.OlderThan.Unix()),
UpdatedLessRecentlyThan: timeutil.TimeStamp(opts.UpdatedLessRecentlyThan.Unix()),
})
if err == errStop {


@@ -14,6 +14,7 @@ import (
"code.gitea.io/gitea/modules/lfs"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/test"
repo_service "code.gitea.io/gitea/services/repository"
"github.com/stretchr/testify/assert"
@@ -22,7 +23,8 @@ import (
func TestGarbageCollectLFSMetaObjects(t *testing.T) {
unittest.PrepareTestEnv(t)
setting.LFS.StartServer = true
defer test.MockVariableValue(&setting.LFS.StartServer, true)()
err := storage.Init()
assert.NoError(t, err)
@@ -46,6 +48,32 @@ func TestGarbageCollectLFSMetaObjects(t *testing.T) {
assert.ErrorIs(t, err, git_model.ErrLFSObjectNotExist)
}
func TestGarbageCollectLFSMetaObjectsForRepoAutoFix(t *testing.T) {
unittest.PrepareTestEnv(t)
defer test.MockVariableValue(&setting.LFS.StartServer, true)()
err := storage.Init()
assert.NoError(t, err)
repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
// add lfs object
lfsContent := []byte("gitea2")
lfsOid := storeObjectInRepo(t, repo.ID, &lfsContent)
err = repo_service.GarbageCollectLFSMetaObjectsForRepo(t.Context(), repo, repo_service.GarbageCollectLFSMetaObjectsOptions{
LogDetail: func(string, ...any) {},
AutoFix: true,
OlderThan: time.Now().Add(24 * time.Hour * 7),
UpdatedLessRecentlyThan: time.Now().Add(24 * time.Hour * 3),
})
assert.NoError(t, err)
_, err = git_model.GetLFSMetaObjectByOid(t.Context(), repo.ID, lfsOid)
assert.ErrorIs(t, err, git_model.ErrLFSObjectNotExist)
}
func storeObjectInRepo(t *testing.T, repositoryID int64, content *[]byte) string {
pointer, err := lfs.GeneratePointer(bytes.NewReader(*content))
assert.NoError(t, err)


@@ -14,6 +14,12 @@
dnf config-manager --add-repo <origin-url data-url="{{AppSubUrl}}/api/packages/{{$.PackageDescriptor.Owner.Name}}/rpm{{$group}}.repo"></origin-url>
{{- end}}
# Fedora 41+ (DNF5)
{{- range $group := .Groups}}
{{- if $group}}{{$group = print "/" $group}}{{end}}
dnf config-manager addrepo --from-repofile=<origin-url data-url="{{AppSubUrl}}/api/packages/{{$.PackageDescriptor.Owner.Name}}/rpm{{$group}}.repo"></origin-url>
{{- end}}
# {{ctx.Locale.Tr "packages.rpm.distros.suse"}}
{{- range $group := .Groups}}
{{- if $group}}{{$group = print "/" $group}}{{end}}


@@ -6,7 +6,7 @@
{{range $recentBranch := $data.RecentlyPushedNewBranches}}
<div class="ui positive message flex-text-block">
<div class="tw-flex-1">
{{$timeSince := DateUtils.TimeSince $recentBranch.CommitTime}}
{{$timeSince := DateUtils.TimeSince $recentBranch.PushedTime}}
{{$branchLink := HTMLFormat `<a href="%s">%s</a>` $recentBranch.BranchLink .BranchDisplayName}}
{{ctx.Locale.Tr "repo.pulls.recently_pushed_new_branches" $branchLink $timeSince}}
</div>


@@ -133,11 +133,13 @@
<a class="{{if eq .SortType "leastcomment"}}active {{end}}item" href="{{QueryBuild $queryLink "sort" "leastcomment"}}">{{ctx.Locale.Tr "repo.issues.filter_sort.leastcomment"}}</a>
<a class="{{if eq .SortType "nearduedate"}}active {{end}}item" href="{{QueryBuild $queryLink "sort" "nearduedate"}}">{{ctx.Locale.Tr "repo.issues.filter_sort.nearduedate"}}</a>
<a class="{{if eq .SortType "farduedate"}}active {{end}}item" href="{{QueryBuild $queryLink "sort" "farduedate"}}">{{ctx.Locale.Tr "repo.issues.filter_sort.farduedate"}}</a>
<div class="divider"></div>
<div class="header">{{ctx.Locale.Tr "repo.issues.filter_label"}}</div>
{{range $scope := .ExclusiveLabelScopes}}
{{$sortType := (printf "scope-%s" $scope)}}
<a class="{{if eq $.SortType $sortType}}active {{end}}item" href="{{QueryBuild $queryLink "sort" $sortType}}">{{$scope}}</a>
{{if .ExclusiveLabelScopes}}
<div class="divider"></div>
<div class="header">{{ctx.Locale.Tr "repo.issues.filter_label"}}</div>
{{range $scope := .ExclusiveLabelScopes}}
{{$sortType := (printf "scope-%s" $scope)}}
<a class="{{if eq $.SortType $sortType}}active {{end}}item" href="{{QueryBuild $queryLink "sort" $sortType}}">{{$scope}}</a>
{{end}}
{{end}}
</div>
</div>


@@ -1,5 +1,7 @@
{{$pageMeta := .}}
{{$data := .AssigneesData}}
{{$listBaseLink := print $pageMeta.RepoLink (Iif $pageMeta.IsPullRequest "/pulls" "/issues")}}
{{/* TODO: it seems that the code keeps checking $pageMeta.Issue and assumes that it might not exist, need to figure out why */}}
{{$issueAssignees := NIL}}{{if $pageMeta.Issue}}{{$issueAssignees = $pageMeta.Issue.Assignees}}{{end}}
<div class="divider"></div>
<div class="issue-sidebar-combo" data-selection-mode="multiple" data-update-algo="diff"
@@ -19,7 +21,7 @@
<div class="item clear-selection" data-text="">{{ctx.Locale.Tr "repo.issues.new.clear_assignees"}}</div>
<div class="divider"></div>
{{range $data.CandidateAssignees}}
<a class="item" href="#" data-value="{{.ID}}">
<a class="item" href="{{$listBaseLink}}?assignee={{.ID}}" data-value="{{.ID}}">
<span class="item-check-mark">{{svg "octicon-check"}}</span>
{{ctx.AvatarUtils.Avatar . 20}} {{template "repo/search_name" .}}
</a>
@@ -30,8 +32,8 @@
<div class="ui relaxed list muted-links flex-items-block">
<span class="item empty-list {{if $issueAssignees}}tw-hidden{{end}}">{{ctx.Locale.Tr "repo.issues.new.no_assignees"}}</span>
{{range $issueAssignees}}
<a class="item" href="{{$pageMeta.RepoLink}}/{{if $pageMeta.IsPullRequest}}pulls{{else}}issues{{end}}?assignee={{.ID}}">
{{ctx.AvatarUtils.Avatar . 20}} {{.GetDisplayName}}
<a class="item" href="{{$listBaseLink}}?assignee={{.ID}}">
{{ctx.AvatarUtils.Avatar . 20}} {{.GetDisplayName}}
</a>
{{end}}
</div>


@@ -1,5 +1,6 @@
{{$pageMeta := .}}
{{$data := .LabelsData}}
{{$listBaseLink := print $pageMeta.RepoLink (Iif $pageMeta.IsPullRequest "/pulls" "/issues")}}
<div class="issue-sidebar-combo" data-selection-mode="multiple" data-update-algo="diff"
{{if $pageMeta.Issue}}data-update-url="{{$pageMeta.RepoLink}}/issues/labels?issue_ids={{$pageMeta.Issue.ID}}"{{end}}
>
@@ -26,7 +27,7 @@
<div class="divider" data-scope="{{.ExclusiveScope}}"></div>
{{end}}
{{$previousExclusiveScope = $exclusiveScope}}
{{template "repo/issue/sidebar/label_list_item" dict "Label" .}}
{{template "repo/issue/sidebar/label_list_item" dict "Label" . "LabelLink" (print $listBaseLink "?labels=" .ID)}}
{{end}}
{{if and $data.RepoLabels $data.OrgLabels}}<div class="divider"></div>{{end}}
{{$previousExclusiveScope = "_no_scope"}}
@@ -36,7 +37,7 @@
<div class="divider" data-scope="{{.ExclusiveScope}}"></div>
{{end}}
{{$previousExclusiveScope = $exclusiveScope}}
{{template "repo/issue/sidebar/label_list_item" dict "Label" .}}
{{template "repo/issue/sidebar/label_list_item" dict "Label" . "LabelLink" (print $listBaseLink "?labels=" .ID)}}
{{end}}
</div>
{{end}}
@@ -47,7 +48,7 @@
<span class="item empty-list {{if $data.SelectedLabelIDs}}tw-hidden{{end}}">{{ctx.Locale.Tr "repo.issues.new.no_label"}}</span>
{{range $data.AllLabels}}
{{if .IsChecked}}
<a class="item" href="{{$pageMeta.RepoLink}}/{{if $pageMeta.IsPullRequest}}pulls{{else}}issues{{end}}?labels={{.ID}}">
<a class="item" href="{{$listBaseLink}}?labels={{.ID}}">
{{- ctx.RenderUtils.RenderLabel . -}}
</a>
{{end}}


@@ -1,5 +1,6 @@
{{$label := .Label}}
<a class="item muted {{if $label.IsChecked}}checked{{else if $label.IsArchived}}tw-hidden{{end}}" href="#"
{{$labelLink := or .LabelLink "#"}}
<a class="item muted {{if $label.IsChecked}}checked{{else if $label.IsArchived}}tw-hidden{{end}}" href="{{$labelLink}}"
data-scope="{{$label.ExclusiveScope}}" data-value="{{$label.ID}}" {{if $label.IsArchived}}data-is-archived{{end}}
>
<span class="item-check-mark">{{svg (Iif $label.ExclusiveScope "octicon-dot-fill" "octicon-check")}}</span>
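The `{{or .LabelLink "#"}}` fallback introduced above relies on Go's template truth semantics: an empty string is falsy, so `or` yields the `"#"` default and the anchor stays inert instead of rendering an empty href. A minimal sketch with simplified markup:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderItem mimics the label_list_item change above in miniature: when
// LabelLink is empty, `or` falls back to "#". Simplified sketch; the real
// template carries more attributes and markup.
func renderItem(labelLink string) string {
	tmpl := template.Must(template.New("item").Parse(`<a href="{{or .LabelLink "#"}}">label</a>`))
	var sb strings.Builder
	_ = tmpl.Execute(&sb, map[string]string{"LabelLink": labelLink})
	return sb.String()
}

func main() {
	fmt.Println(renderItem(""))                 // prints <a href="#">label</a>
	fmt.Println(renderItem("/issues?labels=1")) // prints <a href="/issues?labels=1">label</a>
}
```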


@@ -72,7 +72,7 @@
<td>{{if .Version}}{{.Version}}{{else}}{{ctx.Locale.Tr "unknown"}}{{end}}</td>
<td><span data-tooltip-content="{{.BelongsToOwnerName}}">{{.BelongsToOwnerType.LocaleString ctx.Locale}}</span></td>
<td>
<span class="flex-text-inline">{{range .AgentLabels}}<span class="ui label">{{.}}</span>{{end}}</span>
<span class="flex-text-inline tw-flex-wrap">{{range .AgentLabels}}<span class="ui label">{{.}}</span>{{end}}</span>
</td>
<td>{{if .LastOnline}}{{DateUtils.TimeSince .LastOnline}}{{else}}{{ctx.Locale.Tr "never"}}{{end}}</td>
<td>


@@ -1 +1 @@
<a class="author text black tw-font-semibold muted"{{if gt .ID 0}} href="{{.HomeLink}}"{{end}}>{{.GetDisplayName}}</a>{{if .IsTypeBot}}<span class="ui basic label tw-p-1 tw-align-baseline">bot</span>{{end}}
<a class="author text black tw-font-semibold muted"{{if gt .ID 0}} href="{{.HomeLink}}"{{end}}>{{.GetDisplayName}}</a>{{if .IsTypeBot}}&nbsp;<span class="ui basic label tw-p-1 tw-align-baseline">bot</span>{{end}}


@@ -23537,7 +23537,7 @@
"x-go-name": "Name"
},
"object_format_name": {
"description": "ObjectFormatName of the underlying git repository",
"description": "ObjectFormatName of the underlying git repository, empty string for default (sha1)",
"type": "string",
"enum": [
"sha1",


@@ -6,6 +6,7 @@ package integration
import (
"bytes"
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
@@ -22,6 +23,7 @@ import (
"code.gitea.io/gitea/modules/json"
"code.gitea.io/gitea/modules/storage"
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/routers/api/actions"
actions_service "code.gitea.io/gitea/services/actions"
@@ -45,45 +47,135 @@ func TestActionsArtifactV4UploadSingleFile(t *testing.T) {
token, err := actions_service.CreateAuthorizationToken(48, 792, 193)
assert.NoError(t, err)
// acquire artifact upload url
req := NewRequestWithBody(t, "POST", "/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact", toProtoJSON(&actions.CreateArtifactRequest{
Version: 4,
Name: "artifact",
WorkflowRunBackendId: "792",
WorkflowJobRunBackendId: "193",
})).AddTokenAuth(token)
resp := MakeRequest(t, req, http.StatusOK)
var uploadResp actions.CreateArtifactResponse
protojson.Unmarshal(resp.Body.Bytes(), &uploadResp)
assert.True(t, uploadResp.Ok)
assert.Contains(t, uploadResp.SignedUploadUrl, "/twirp/github.actions.results.api.v1.ArtifactService/UploadArtifact")
table := []struct {
name string
version int32
blockID bool
noLength bool
append int
}{
{
name: "artifact",
version: 4,
},
{
name: "artifact2",
version: 4,
blockID: true,
},
{
name: "artifact3",
version: 4,
noLength: true,
},
{
name: "artifact4",
version: 4,
blockID: true,
noLength: true,
},
{
name: "artifact5",
version: 7,
blockID: true,
},
{
name: "artifact6",
version: 7,
append: 2,
noLength: true,
},
{
name: "artifact7",
version: 7,
append: 3,
blockID: true,
noLength: true,
},
{
name: "artifact8",
version: 7,
append: 4,
blockID: true,
},
}
// get upload url
idx := strings.Index(uploadResp.SignedUploadUrl, "/twirp/")
url := uploadResp.SignedUploadUrl[idx:] + "&comp=block"
for _, entry := range table {
t.Run(entry.name, func(t *testing.T) {
// acquire artifact upload url
req := NewRequestWithBody(t, "POST", "/twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact", toProtoJSON(&actions.CreateArtifactRequest{
Version: entry.version,
Name: entry.name,
WorkflowRunBackendId: "792",
WorkflowJobRunBackendId: "193",
})).AddTokenAuth(token)
resp := MakeRequest(t, req, http.StatusOK)
var uploadResp actions.CreateArtifactResponse
protojson.Unmarshal(resp.Body.Bytes(), &uploadResp)
assert.True(t, uploadResp.Ok)
assert.Contains(t, uploadResp.SignedUploadUrl, "/twirp/github.actions.results.api.v1.ArtifactService/UploadArtifact")
// upload artifact chunk
body := strings.Repeat("A", 1024)
req = NewRequestWithBody(t, "PUT", url, strings.NewReader(body))
MakeRequest(t, req, http.StatusCreated)
h := sha256.New()
t.Logf("Create artifact confirm")
blocks := make([]string, 0, util.Iif(entry.blockID, entry.append+1, 0))
sha := sha256.Sum256([]byte(body))
// get upload url
idx := strings.Index(uploadResp.SignedUploadUrl, "/twirp/")
for i := range entry.append + 1 {
url := uploadResp.SignedUploadUrl[idx:]
// See https://learn.microsoft.com/en-us/rest/api/storageservices/append-block
// See https://learn.microsoft.com/en-us/rest/api/storageservices/put-block
if entry.blockID {
blockID := base64.RawURLEncoding.EncodeToString(fmt.Append([]byte("SOME_BIG_BLOCK_ID_"), i))
blocks = append(blocks, blockID)
url += "&comp=block&blockid=" + blockID
} else {
url += "&comp=appendBlock"
}
// confirm artifact upload
req = NewRequestWithBody(t, "POST", "/twirp/github.actions.results.api.v1.ArtifactService/FinalizeArtifact", toProtoJSON(&actions.FinalizeArtifactRequest{
Name: "artifact",
Size: 1024,
Hash: wrapperspb.String("sha256:" + hex.EncodeToString(sha[:])),
WorkflowRunBackendId: "792",
WorkflowJobRunBackendId: "193",
})).
AddTokenAuth(token)
resp = MakeRequest(t, req, http.StatusOK)
var finalizeResp actions.FinalizeArtifactResponse
protojson.Unmarshal(resp.Body.Bytes(), &finalizeResp)
assert.True(t, finalizeResp.Ok)
// upload artifact chunk
body := strings.Repeat("A", 1024)
_, _ = h.Write([]byte(body))
var bodyReader io.Reader = strings.NewReader(body)
if entry.noLength {
bodyReader = io.MultiReader(bodyReader)
}
req = NewRequestWithBody(t, "PUT", url, bodyReader)
MakeRequest(t, req, http.StatusCreated)
}
if entry.blockID && entry.append > 0 {
// https://learn.microsoft.com/en-us/rest/api/storageservices/put-block-list
blockListURL := uploadResp.SignedUploadUrl[idx:] + "&comp=blocklist"
// upload artifact blockList
blockList := &actions.BlockList{
Latest: blocks,
}
rawBlockList, err := xml.Marshal(blockList)
assert.NoError(t, err)
req = NewRequestWithBody(t, "PUT", blockListURL, bytes.NewReader(rawBlockList))
MakeRequest(t, req, http.StatusCreated)
}
sha := h.Sum(nil)
t.Logf("Create artifact confirm")
// confirm artifact upload
req = NewRequestWithBody(t, "POST", "/twirp/github.actions.results.api.v1.ArtifactService/FinalizeArtifact", toProtoJSON(&actions.FinalizeArtifactRequest{
Name: entry.name,
Size: int64(entry.append+1) * 1024,
Hash: wrapperspb.String("sha256:" + hex.EncodeToString(sha)),
WorkflowRunBackendId: "792",
WorkflowJobRunBackendId: "193",
})).
AddTokenAuth(token)
resp = MakeRequest(t, req, http.StatusOK)
var finalizeResp actions.FinalizeArtifactResponse
protojson.Unmarshal(resp.Body.Bytes(), &finalizeResp)
assert.True(t, finalizeResp.Ok)
})
}
}
func TestActionsArtifactV4UploadSingleFileWrongChecksum(t *testing.T) {
@@ -312,7 +404,7 @@ func TestActionsArtifactV4DownloadSingle(t *testing.T) {
token, err := actions_service.CreateAuthorizationToken(48, 792, 193)
assert.NoError(t, err)
// acquire artifact upload url
// list artifacts by name
req := NewRequestWithBody(t, "POST", "/twirp/github.actions.results.api.v1.ArtifactService/ListArtifacts", toProtoJSON(&actions.ListArtifactsRequest{
NameFilter: wrapperspb.String("artifact-v4-download"),
WorkflowRunBackendId: "792",
@@ -323,7 +415,7 @@ func TestActionsArtifactV4DownloadSingle(t *testing.T) {
protojson.Unmarshal(resp.Body.Bytes(), &listResp)
assert.Len(t, listResp.Artifacts, 1)
// confirm artifact upload
// acquire artifact download url
req = NewRequestWithBody(t, "POST", "/twirp/github.actions.results.api.v1.ArtifactService/GetSignedArtifactURL", toProtoJSON(&actions.GetSignedArtifactURLRequest{
Name: "artifact-v4-download",
WorkflowRunBackendId: "792",


@@ -79,6 +79,12 @@ func TestAPIDeleteTrackedTime(t *testing.T) {
AddTokenAuth(token)
MakeRequest(t, req, http.StatusForbidden)
// Deletion should be scoped to the issue in the URL
time5 := unittest.AssertExistsAndLoadBean(t, &issues_model.TrackedTime{ID: 5})
req = NewRequestf(t, "DELETE", "/api/v1/repos/%s/%s/issues/%d/times/%d", user2.Name, issue2.Repo.Name, issue2.Index, time5.ID).
AddTokenAuth(token)
MakeRequest(t, req, http.StatusNotFound)
time3 := unittest.AssertExistsAndLoadBean(t, &issues_model.TrackedTime{ID: 3})
req = NewRequestf(t, "DELETE", "/api/v1/repos/%s/%s/issues/%d/times/%d", user2.Name, issue2.Repo.Name, issue2.Index, time3.ID).
AddTokenAuth(token)


@@ -13,8 +13,11 @@ import (
"code.gitea.io/gitea/models/unittest"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/setting"
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/test"
"code.gitea.io/gitea/tests"
"github.com/stretchr/testify/assert"
)
func TestAPIEditReleaseAttachmentWithUnallowedFile(t *testing.T) {
@@ -38,3 +41,36 @@ func TestAPIEditReleaseAttachmentWithUnallowedFile(t *testing.T) {
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
}
func TestAPIDraftReleaseAttachmentAccess(t *testing.T) {
defer tests.PrepareTestEnv(t)()
attachment := unittest.AssertExistsAndLoadBean(t, &repo_model.Attachment{ID: 13})
release := unittest.AssertExistsAndLoadBean(t, &repo_model.Release{ID: attachment.ReleaseID})
repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: attachment.RepoID})
repoOwner := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: repo.OwnerID})
reader := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
listURL := fmt.Sprintf("/api/v1/repos/%s/%s/releases/%d/assets", repoOwner.Name, repo.Name, release.ID)
getURL := fmt.Sprintf("/api/v1/repos/%s/%s/releases/%d/assets/%d", repoOwner.Name, repo.Name, release.ID, attachment.ID)
MakeRequest(t, NewRequest(t, "GET", listURL), http.StatusNotFound)
MakeRequest(t, NewRequest(t, "GET", getURL), http.StatusNotFound)
readerToken := getUserToken(t, reader.LowerName, auth_model.AccessTokenScopeReadRepository)
MakeRequest(t, NewRequest(t, "GET", listURL).AddTokenAuth(readerToken), http.StatusNotFound)
MakeRequest(t, NewRequest(t, "GET", getURL).AddTokenAuth(readerToken), http.StatusNotFound)
ownerReadToken := getUserToken(t, repoOwner.LowerName, auth_model.AccessTokenScopeReadRepository)
MakeRequest(t, NewRequest(t, "GET", listURL).AddTokenAuth(ownerReadToken), http.StatusNotFound)
MakeRequest(t, NewRequest(t, "GET", getURL).AddTokenAuth(ownerReadToken), http.StatusNotFound)
ownerToken := getUserToken(t, repoOwner.LowerName, auth_model.AccessTokenScopeWriteRepository)
resp := MakeRequest(t, NewRequest(t, "GET", listURL).AddTokenAuth(ownerToken), http.StatusOK)
var attachments []*api.Attachment
DecodeJSON(t, resp, &attachments)
if assert.Len(t, attachments, 1) {
assert.Equal(t, attachment.ID, attachments[0].ID)
}
MakeRequest(t, NewRequest(t, "GET", getURL).AddTokenAuth(ownerToken), http.StatusOK)
}


@@ -29,12 +29,12 @@ import (
"github.com/stretchr/testify/assert"
)
func TestAPIListReleases(t *testing.T) {
func TestAPIListReleasesWithWriteToken(t *testing.T) {
defer tests.PrepareTestEnv(t)()
repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2})
token := getUserToken(t, user2.LowerName, auth_model.AccessTokenScopeReadRepository)
token := getUserToken(t, user2.LowerName, auth_model.AccessTokenScopeWriteRepository)
link, _ := url.Parse(fmt.Sprintf("/api/v1/repos/%s/%s/releases", user2.Name, repo.Name))
resp := MakeRequest(t, NewRequest(t, "GET", link.String()).AddTokenAuth(token), http.StatusOK)
@@ -81,6 +81,76 @@ func TestAPIListReleases(t *testing.T) {
testFilterByLen(true, url.Values{"draft": {"true"}, "pre-release": {"true"}}, 0, "there is no pre-release draft")
}
func TestAPIListReleasesWithReadToken(t *testing.T) {
defer tests.PrepareTestEnv(t)()
repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
user2 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2})
token := getUserToken(t, user2.LowerName, auth_model.AccessTokenScopeReadRepository)
link, _ := url.Parse(fmt.Sprintf("/api/v1/repos/%s/%s/releases", user2.Name, repo.Name))
resp := MakeRequest(t, NewRequest(t, "GET", link.String()).AddTokenAuth(token), http.StatusOK)
var apiReleases []*api.Release
DecodeJSON(t, resp, &apiReleases)
if assert.Len(t, apiReleases, 2) {
for _, release := range apiReleases {
switch release.ID {
case 1:
assert.False(t, release.IsDraft)
assert.False(t, release.IsPrerelease)
assert.True(t, strings.HasSuffix(release.UploadURL, "/api/v1/repos/user2/repo1/releases/1/assets"), release.UploadURL)
case 5:
assert.False(t, release.IsDraft)
assert.True(t, release.IsPrerelease)
assert.True(t, strings.HasSuffix(release.UploadURL, "/api/v1/repos/user2/repo1/releases/5/assets"), release.UploadURL)
default:
assert.NoError(t, fmt.Errorf("unexpected release: %v", release))
}
}
}
// test filter
testFilterByLen := func(auth bool, query url.Values, expectedLength int, msgAndArgs ...string) {
link.RawQuery = query.Encode()
req := NewRequest(t, "GET", link.String())
if auth {
req.AddTokenAuth(token)
}
resp = MakeRequest(t, req, http.StatusOK)
DecodeJSON(t, resp, &apiReleases)
assert.Len(t, apiReleases, expectedLength, msgAndArgs)
}
testFilterByLen(false, url.Values{"draft": {"true"}}, 0, "anon should not see drafts")
testFilterByLen(true, url.Values{"draft": {"true"}}, 0, "repo owner with read token should not see drafts")
testFilterByLen(true, url.Values{"draft": {"false"}}, 2, "exclude drafts")
testFilterByLen(true, url.Values{"draft": {"false"}, "pre-release": {"false"}}, 1, "exclude drafts and pre-releases")
testFilterByLen(true, url.Values{"pre-release": {"true"}}, 1, "only get pre-release")
testFilterByLen(true, url.Values{"draft": {"true"}, "pre-release": {"true"}}, 0, "there is no pre-release draft")
}
func TestAPIGetDraftRelease(t *testing.T) {
defer tests.PrepareTestEnv(t)()
repo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
release := unittest.AssertExistsAndLoadBean(t, &repo_model.Release{ID: 4})
owner := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: repo.OwnerID})
reader := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
urlStr := fmt.Sprintf("/api/v1/repos/%s/%s/releases/%d", owner.Name, repo.Name, release.ID)
MakeRequest(t, NewRequest(t, "GET", urlStr), http.StatusNotFound)
readerToken := getUserToken(t, reader.LowerName, auth_model.AccessTokenScopeReadRepository)
MakeRequest(t, NewRequest(t, "GET", urlStr).AddTokenAuth(readerToken), http.StatusNotFound)
ownerToken := getUserToken(t, owner.LowerName, auth_model.AccessTokenScopeWriteRepository)
resp := MakeRequest(t, NewRequest(t, "GET", urlStr).AddTokenAuth(ownerToken), http.StatusOK)
var apiRelease api.Release
DecodeJSON(t, resp, &apiRelease)
assert.Equal(t, release.Title, apiRelease.Title)
}
func createNewReleaseUsingAPI(t *testing.T, token string, owner *user_model.User, repo *repo_model.Repository, name, target, title, desc string) *api.Release {
urlStr := fmt.Sprintf("/api/v1/repos/%s/%s/releases", owner.Name, repo.Name)
req := NewRequestWithJSON(t, "POST", urlStr, &api.CreateReleaseOption{


@@ -15,6 +15,21 @@ import (
"github.com/stretchr/testify/assert"
)
func TestPermissionsAPI(t *testing.T) {
defer tests.PrepareTestEnv(t)()
t.Run("TokenNeeded", testTokenNeeded)
t.Run("WithOwnerUser", testWithOwnerUser)
t.Run("CanWriteUser", testCanWriteUser)
t.Run("AdminUser", testAdminUser)
t.Run("AdminCanNotCreateRepo", testAdminCanNotCreateRepo)
t.Run("CanReadUser", testCanReadUser)
t.Run("UnknownUser", testUnknownUser)
t.Run("UnknownOrganization", testUnknownOrganization)
t.Run("HiddenMemberPermissionsForbidden", testHiddenMemberPermissionsForbidden)
t.Run("PrivateOrgPermissionsNotFound", testPrivateOrgPermissionsNotFound)
}
type apiUserOrgPermTestCase struct {
LoginUser string
User string
@@ -22,16 +37,12 @@ type apiUserOrgPermTestCase struct {
ExpectedOrganizationPermissions api.OrganizationPermissions
}
func TestTokenNeeded(t *testing.T) {
defer tests.PrepareTestEnv(t)()
func testTokenNeeded(t *testing.T) {
req := NewRequest(t, "GET", "/api/v1/users/user1/orgs/org6/permissions")
MakeRequest(t, req, http.StatusUnauthorized)
}
func sampleTest(t *testing.T, auoptc apiUserOrgPermTestCase) {
defer tests.PrepareTestEnv(t)()
session := loginUser(t, auoptc.LoginUser)
token := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeReadOrganization, auth_model.AccessTokenScopeReadUser)
@@ -48,7 +59,7 @@ func sampleTest(t *testing.T, auoptc apiUserOrgPermTestCase) {
assert.Equal(t, auoptc.ExpectedOrganizationPermissions.CanCreateRepository, apiOP.CanCreateRepository)
}
func TestWithOwnerUser(t *testing.T) {
func testWithOwnerUser(t *testing.T) {
sampleTest(t, apiUserOrgPermTestCase{
LoginUser: "user2",
User: "user2",
@@ -63,7 +74,7 @@ func TestWithOwnerUser(t *testing.T) {
})
}
func TestCanWriteUser(t *testing.T) {
func testCanWriteUser(t *testing.T) {
sampleTest(t, apiUserOrgPermTestCase{
LoginUser: "user4",
User: "user4",
@@ -78,7 +89,7 @@ func TestCanWriteUser(t *testing.T) {
})
}
func TestAdminUser(t *testing.T) {
func testAdminUser(t *testing.T) {
sampleTest(t, apiUserOrgPermTestCase{
LoginUser: "user1",
User: "user28",
@@ -93,7 +104,7 @@ func TestAdminUser(t *testing.T) {
})
}
func TestAdminCanNotCreateRepo(t *testing.T) {
func testAdminCanNotCreateRepo(t *testing.T) {
sampleTest(t, apiUserOrgPermTestCase{
LoginUser: "user1",
User: "user28",
@@ -108,7 +119,7 @@ func TestAdminCanNotCreateRepo(t *testing.T) {
})
}
func TestCanReadUser(t *testing.T) {
func testCanReadUser(t *testing.T) {
sampleTest(t, apiUserOrgPermTestCase{
LoginUser: "user1",
User: "user24",
@@ -123,9 +134,7 @@ func TestCanReadUser(t *testing.T) {
})
}
func TestUnknowUser(t *testing.T) {
defer tests.PrepareTestEnv(t)()
func testUnknownUser(t *testing.T) {
session := loginUser(t, "user1")
token := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeReadUser, auth_model.AccessTokenScopeReadOrganization)
@@ -138,9 +147,7 @@ func TestUnknowUser(t *testing.T) {
assert.Equal(t, "user redirect does not exist [name: unknow]", apiError.Message)
}
func TestUnknowOrganization(t *testing.T) {
defer tests.PrepareTestEnv(t)()
func testUnknownOrganization(t *testing.T) {
session := loginUser(t, "user1")
token := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeReadUser, auth_model.AccessTokenScopeReadOrganization)
@@ -151,3 +158,38 @@ func TestUnknowOrganization(t *testing.T) {
DecodeJSON(t, resp, &apiError)
assert.Equal(t, "GetUserByName", apiError.Message)
}
func testHiddenMemberPermissionsForbidden(t *testing.T) {
session := loginUser(t, "user8")
token := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeReadUser, auth_model.AccessTokenScopeReadOrganization)
req := NewRequest(t, "GET", "/api/v1/users/user5/orgs/privated_org/permissions").
AddTokenAuth(token)
MakeRequest(t, req, http.StatusNotFound)
adminSession := loginUser(t, "user1")
adminToken := getTokenForLoggedInUser(t, adminSession, auth_model.AccessTokenScopeReadUser, auth_model.AccessTokenScopeReadOrganization)
adminReq := NewRequest(t, "GET", "/api/v1/users/user5/orgs/privated_org/permissions").
AddTokenAuth(adminToken)
resp := MakeRequest(t, adminReq, http.StatusOK)
var apiOP api.OrganizationPermissions
DecodeJSON(t, resp, &apiOP)
assert.Equal(t, api.OrganizationPermissions{
IsOwner: false,
IsAdmin: false,
CanWrite: true,
CanRead: true,
CanCreateRepository: true,
}, apiOP)
}
func testPrivateOrgPermissionsNotFound(t *testing.T) {
session := loginUser(t, "user8")
token := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeReadUser, auth_model.AccessTokenScopeReadOrganization)
req := NewRequest(t, "GET", "/api/v1/users/user5/orgs/privated_org/permissions").
AddTokenAuth(token)
MakeRequest(t, req, http.StatusNotFound)
}
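The two tests above pin down the visibility rule the backport fixes: a non-member querying permissions in a private org (or for a hidden member) gets 404, while an instance admin still gets the real answer. A minimal, illustrative sketch of that decision — not Gitea's actual implementation, and the type and function names here are invented for the example:

```go
package main

import "fmt"

// orgView is a minimal, illustrative model of the inputs to the visibility
// decision the tests above exercise; it is not Gitea's real data model.
type orgView struct {
	OrgPrivate   bool
	ViewerAdmin  bool
	ViewerMember bool
}

// canQueryPermissions: admins always may; for a private org the viewer must
// be a member, otherwise the API answers 404 as if the org did not exist.
func canQueryPermissions(v orgView) bool {
	if v.ViewerAdmin {
		return true
	}
	if v.OrgPrivate && !v.ViewerMember {
		return false
	}
	return true
}

func main() {
	// user8 is not a member of privated_org, so the query is refused (404)
	fmt.Println(canQueryPermissions(orgView{OrgPrivate: true}))
	// user1 is an instance admin, so the query succeeds (200)
	fmt.Println(canQueryPermissions(orgView{OrgPrivate: true, ViewerAdmin: true}))
}
```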

View File

@@ -347,6 +347,10 @@ func MakeRequest(t testing.TB, rw *RequestWrapper, expectedStatus int) *httptest
if req.RemoteAddr == "" {
req.RemoteAddr = "test-mock:12345"
}
// Ensure an unknown ContentLength is seen as -1
if req.Body != nil && req.ContentLength == 0 {
req.ContentLength = -1
}
testWebRoutes.ServeHTTP(recorder, req)
if expectedStatus != NoExpectedStatus {
if expectedStatus != recorder.Code {

View File

@@ -0,0 +1,34 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package integration
import (
"fmt"
"net/http"
"testing"
issues_model "code.gitea.io/gitea/models/issues"
"code.gitea.io/gitea/models/unittest"
"code.gitea.io/gitea/tests"
"github.com/stretchr/testify/assert"
)
func TestIssueTimeDeleteScoped(t *testing.T) {
defer tests.PrepareTestEnv(t)()
issue1 := unittest.AssertExistsAndLoadBean(t, &issues_model.Issue{ID: 1})
assert.NoError(t, issue1.LoadRepo(t.Context()))
tracked := unittest.AssertExistsAndLoadBean(t, &issues_model.TrackedTime{ID: 5})
session := loginUser(t, issue1.Repo.OwnerName)
url := fmt.Sprintf("/%s/%s/issues/%d/times/%d/delete", issue1.Repo.OwnerName, issue1.Repo.Name, issue1.Index, tracked.ID)
req := NewRequestWithValues(t, "POST", url, map[string]string{
"_csrf": GetUserCSRFToken(t, session),
})
session.MakeRequest(t, req, http.StatusNotFound)
tracked = unittest.AssertExistsAndLoadBean(t, &issues_model.TrackedTime{ID: tracked.ID})
assert.False(t, tracked.Deleted)
}

View File

@@ -20,6 +20,7 @@ import (
"code.gitea.io/gitea/services/migrations"
mirror_service "code.gitea.io/gitea/services/mirror"
repo_service "code.gitea.io/gitea/services/repository"
wiki_service "code.gitea.io/gitea/services/wiki"
"code.gitea.io/gitea/tests"
"github.com/stretchr/testify/assert"
@@ -29,6 +30,10 @@ func TestMirrorPush(t *testing.T) {
onGiteaRun(t, testMirrorPush)
}
func TestMirrorPushWikiDefaultBranchMismatch(t *testing.T) {
onGiteaRun(t, testMirrorPushWikiDefaultBranchMismatch)
}
func testMirrorPush(t *testing.T, u *url.URL) {
setting.Migrations.AllowLocalNetworks = true
assert.NoError(t, migrations.Init())
@@ -77,6 +82,45 @@ func testMirrorPush(t *testing.T, u *url.URL) {
assert.Empty(t, mirrors)
}
func testMirrorPushWikiDefaultBranchMismatch(t *testing.T, u *url.URL) {
setting.Migrations.AllowLocalNetworks = true
assert.NoError(t, migrations.Init())
_ = db.TruncateBeans(t.Context(), &repo_model.PushMirror{})
user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2})
srcRepo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{ID: 1})
mirrorRepo, err := repo_service.CreateRepositoryDirectly(t.Context(), user, user, repo_service.CreateRepoOptions{
Name: "test-push-mirror-wiki",
}, true)
assert.NoError(t, err)
assert.NoError(t, wiki_service.AddWikiPage(t.Context(), user, mirrorRepo, wiki_service.WebPath("Home"), "Mirror wiki content", "init wiki"))
mirrorRepo.DefaultBranch = "mirror-head"
assert.NoError(t, repo_model.UpdateRepositoryColsNoAutoTime(t.Context(), mirrorRepo, "default_branch"))
gitRepo, err := gitrepo.OpenRepository(t.Context(), mirrorRepo.WikiStorageRepo())
assert.NoError(t, err)
defer gitRepo.Close()
wikiCommitID, err := gitrepo.GetBranchCommitID(t.Context(), mirrorRepo.WikiStorageRepo(), mirrorRepo.DefaultWikiBranch)
assert.NoError(t, err)
assert.NoError(t, gitRepo.CreateBranch("mirror-head", wikiCommitID))
session := loginUser(t, user.Name)
pushMirrorURL := fmt.Sprintf("%s%s/%s", u.String(), url.PathEscape(user.Name), url.PathEscape(mirrorRepo.Name))
testCreatePushMirror(t, session, user.Name, srcRepo.Name, pushMirrorURL, user.LowerName, userPassword, "0")
mirrors, _, err := repo_model.GetPushMirrorsByRepoID(t.Context(), srcRepo.ID, db.ListOptions{})
assert.NoError(t, err)
assert.Len(t, mirrors, 1)
ok := mirror_service.SyncPushMirror(t.Context(), mirrors[0].ID)
assert.True(t, ok)
}
func testCreatePushMirror(t *testing.T, session *TestSession, owner, repo, address, username, password, interval string) {
req := NewRequestWithValues(t, "POST", fmt.Sprintf("/%s/%s/settings", url.PathEscape(owner), url.PathEscape(repo)), map[string]string{
"_csrf": GetUserCSRFToken(t, session),

View File

@@ -10,6 +10,7 @@ import (
"io"
"net/http"
"net/http/httptest"
"net/url"
"strings"
"testing"
@@ -95,6 +96,45 @@ func TestAuthorizeShow(t *testing.T) {
htmlDoc.GetCSRF()
}
func TestAuthorizeGrantS256RequiresVerifier(t *testing.T) {
defer tests.PrepareTestEnv(t)()
ctx := loginUser(t, "user4")
codeChallenge := "CjvyTLSdR47G5zYenDA-eDWW4lRrO8yvjcWwbD_deOg"
req := NewRequest(t, "GET", "/login/oauth/authorize?client_id=da7da3ba-9a13-4167-856f-3899de0b0138&redirect_uri=a&response_type=code&state=thestate&code_challenge_method=S256&code_challenge="+url.QueryEscape(codeChallenge))
resp := ctx.MakeRequest(t, req, http.StatusOK)
htmlDoc := NewHTMLParser(t, resp.Body)
AssertHTMLElement(t, htmlDoc, "#authorize-app", true)
grantReq := NewRequestWithValues(t, "POST", "/login/oauth/grant", map[string]string{
"client_id": "da7da3ba-9a13-4167-856f-3899de0b0138",
"state": "thestate",
"scope": "",
"nonce": "",
"redirect_uri": "a",
"granted": "true",
"_csrf": htmlDoc.GetCSRF(),
})
grantResp := ctx.MakeRequest(t, grantReq, http.StatusSeeOther)
u, err := grantResp.Result().Location()
assert.NoError(t, err)
code := u.Query().Get("code")
assert.NotEmpty(t, code)
accessReq := NewRequestWithValues(t, "POST", "/login/oauth/access_token", map[string]string{
"grant_type": "authorization_code",
"client_id": "da7da3ba-9a13-4167-856f-3899de0b0138",
"client_secret": "4MK8Na6R55smdCY0WuCCumZ6hjRPnGY5saWVRHHjJiA=",
"redirect_uri": "a",
"code": code,
})
accessResp := MakeRequest(t, accessReq, http.StatusBadRequest)
parsedError := new(oauth2_provider.AccessTokenError)
assert.NoError(t, json.Unmarshal(accessResp.Body.Bytes(), parsedError))
assert.Equal(t, "unauthorized_client", string(parsedError.ErrorCode))
assert.Equal(t, "failed PKCE code challenge", parsedError.ErrorDescription)
}
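The test above exercises the S256 PKCE check: the authorization request carries `code_challenge = BASE64URL(SHA256(code_verifier))`, and the later token exchange must fail with "failed PKCE code challenge" when the verifier is missing or wrong. A minimal sketch of the S256 relation per RFC 7636, independent of Gitea's implementation (the verifier string is illustrative):

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/base64"
	"fmt"
)

// s256Challenge derives the code_challenge from a code_verifier per RFC 7636:
// BASE64URL-ENCODE(SHA256(ASCII(code_verifier))), without padding.
func s256Challenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

// verifyS256 is the check a token endpoint performs against the stored
// challenge; a constant-time compare avoids leaking the match via timing.
func verifyS256(verifier, challenge string) bool {
	derived := s256Challenge(verifier)
	return subtle.ConstantTimeCompare([]byte(derived), []byte(challenge)) == 1
}

func main() {
	verifier := "N1bLuKZBKs1IGQmLA0bUU9JruOQZu8ItHOEqqkFBF4w" // illustrative
	challenge := s256Challenge(verifier)
	fmt.Println(verifyS256(verifier, challenge))         // true
	fmt.Println(verifyS256("wrong-verifier", challenge)) // false
}
```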
func TestAuthorizeRedirectWithExistingGrant(t *testing.T) {
defer tests.PrepareTestEnv(t)()
req := NewRequest(t, "GET", "/login/oauth/authorize?client_id=da7da3ba-9a13-4167-856f-3899de0b0138&redirect_uri=https%3A%2F%2Fexample.com%2Fxyzzy&response_type=code&state=thestate")

View File

@@ -11,7 +11,6 @@ import (
"net/url"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"testing"
@@ -95,7 +94,7 @@ func TestPullMerge(t *testing.T) {
assert.NoError(t, err)
hookTasksLenBefore := len(hookTasks)
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
@@ -129,7 +128,7 @@ func TestPullRebase(t *testing.T) {
assert.NoError(t, err)
hookTasksLenBefore := len(hookTasks)
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
@@ -163,7 +162,7 @@ func TestPullRebaseMerge(t *testing.T) {
assert.NoError(t, err)
hookTasksLenBefore := len(hookTasks)
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
@@ -197,7 +196,7 @@ func TestPullSquash(t *testing.T) {
assert.NoError(t, err)
hookTasksLenBefore := len(hookTasks)
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited!)\n")
@@ -216,7 +215,7 @@ func TestPullSquash(t *testing.T) {
func TestPullCleanUpAfterMerge(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "feature/test", "README.md", "Hello, World (Edited - TestPullCleanUpAfterMerge)\n")
@@ -263,7 +262,7 @@ func TestPullCleanUpAfterMerge(t *testing.T) {
func TestCantMergeWorkInProgress(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
@@ -282,7 +281,7 @@ func TestCantMergeWorkInProgress(t *testing.T) {
func TestCantMergeConflict(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "conflict", "README.md", "Hello, World (Edited Once)\n")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "base", "README.md", "Hello, World (Edited Twice)\n")
@@ -328,7 +327,7 @@ func TestCantMergeConflict(t *testing.T) {
func TestCantMergeUnrelated(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "base", "README.md", "Hello, World (Edited Twice)\n")
@@ -423,7 +422,7 @@ func TestCantMergeUnrelated(t *testing.T) {
func TestFastForwardOnlyMerge(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "update", "README.md", "Hello, World 2\n")
@@ -464,7 +463,7 @@ func TestFastForwardOnlyMerge(t *testing.T) {
func TestCantFastForwardOnlyMergeDiverging(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "diverging", "README.md", "Hello, World diverged\n")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World 2\n")
@@ -587,7 +586,7 @@ func TestConflictChecking(t *testing.T) {
func TestPullRetargetChildOnBranchDelete(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testEditFileToNewBranch(t, session, "user2", "repo1", "master", "base-pr", "README.md", "Hello, World\n(Edited - TestPullRetargetOnCleanup - base PR)\n")
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "base-pr", "child-pr", "README.md", "Hello, World\n(Edited - TestPullRetargetOnCleanup - base PR)\n(Edited - TestPullRetargetOnCleanup - child PR)")
@@ -621,7 +620,7 @@ func TestPullRetargetChildOnBranchDelete(t *testing.T) {
func TestPullDontRetargetChildOnWrongRepo(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFileToNewBranch(t, session, "user1", "repo1", "master", "base-pr", "README.md", "Hello, World\n(Edited - TestPullDontRetargetChildOnWrongRepo - base PR)\n")
testEditFileToNewBranch(t, session, "user1", "repo1", "base-pr", "child-pr", "README.md", "Hello, World\n(Edited - TestPullDontRetargetChildOnWrongRepo - base PR)\n(Edited - TestPullDontRetargetChildOnWrongRepo - child PR)")
@@ -680,7 +679,7 @@ func TestPullRequestMergedWithNoPermissionDeleteBranch(t *testing.T) {
func TestPullMergeIndexerNotifier(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
// create a pull request
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testRepoFork(t, session, "user2", "repo1", "user1", "repo1", "")
testEditFile(t, session, "user1", "repo1", "master", "README.md", "Hello, World (Edited)\n")
createPullResp := testPullCreate(t, session, "user1", "repo1", false, "master", "master", "Indexer notifier test pull")
@@ -737,31 +736,13 @@ func TestPullMergeIndexerNotifier(t *testing.T) {
})
}
func testResetRepo(t *testing.T, repoPath, branch, commitID string) {
f, err := os.OpenFile(filepath.Join(repoPath, "refs", "heads", branch), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
assert.NoError(t, err)
_, err = f.WriteString(commitID + "\n")
assert.NoError(t, err)
f.Close()
repo, err := git.OpenRepository(t.Context(), repoPath)
assert.NoError(t, err)
defer repo.Close()
id, err := repo.GetBranchCommitID(branch)
assert.NoError(t, err)
assert.Equal(t, commitID, id)
}
func TestPullAutoMergeAfterCommitStatusSucceed(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
// create a pull request
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
user1 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
forkedName := "repo1-1"
testRepoFork(t, session, "user2", "repo1", "user1", forkedName, "")
defer func() {
testDeleteRepository(t, session, "user1", forkedName)
}()
testEditFile(t, session, "user1", forkedName, "master", "README.md", "Hello, World (Edited)\n")
testPullCreate(t, session, "user1", forkedName, false, "master", "master", "Indexer notifier test pull")
@@ -818,16 +799,10 @@ func TestPullAutoMergeAfterCommitStatusSucceed(t *testing.T) {
assert.NoError(t, err)
sha, err := baseGitRepo.GetRefCommitID(pr.GetGitHeadRefName())
assert.NoError(t, err)
masterCommitID, err := baseGitRepo.GetBranchCommitID("master")
assert.NoError(t, err)
branches, _, err := baseGitRepo.GetBranchNames(0, 100)
assert.NoError(t, err)
assert.ElementsMatch(t, []string{"sub-home-md-img-check", "home-md-img-check", "pr-to-update", "branch2", "DefaultBranch", "develop", "feature/1", "master"}, branches)
baseGitRepo.Close()
defer func() {
testResetRepo(t, baseRepo.RepoPath(), "master", masterCommitID)
}()
err = commitstatus_service.CreateCommitStatus(t.Context(), baseRepo, user1, sha, &git_model.CommitStatus{
State: commitstatus.CommitStatusSuccess,
@@ -848,18 +823,17 @@ func TestPullAutoMergeAfterCommitStatusSucceed(t *testing.T) {
func TestPullAutoMergeAfterCommitStatusSucceedAndApproval(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
// create a pull request
session := loginUser(t, "user1")
user1 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 1})
forkedName := "repo1-2"
testRepoFork(t, session, "user2", "repo1", "user1", forkedName, "")
defer func() {
testDeleteRepository(t, session, "user1", forkedName)
}()
testEditFile(t, session, "user1", forkedName, "master", "README.md", "Hello, World (Edited)\n")
testPullCreate(t, session, "user1", forkedName, false, "master", "master", "Indexer notifier test pull")
baseUser := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2})
baseSession := loginUser(t, "user2")
forkUser := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 5})
forkSession := loginUser(t, "user5")
forkedName := "repo1-fork"
testRepoFork(t, forkSession, "user2", "repo1", forkUser.Name, forkedName, "")
testEditFile(t, forkSession, forkUser.Name, forkedName, "master", "README.md", "Hello, World (Edited)\n")
testPullCreate(t, forkSession, forkUser.Name, forkedName, false, "master", "master", "Indexer notifier test pull")
baseRepo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{OwnerName: "user2", Name: "repo1"})
forkedRepo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{OwnerName: "user1", Name: forkedName})
forkedRepo := unittest.AssertExistsAndLoadBean(t, &repo_model.Repository{OwnerName: forkUser.Name, Name: forkedName})
pr := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{
BaseRepoID: baseRepo.ID,
BaseBranch: "master",
@@ -868,7 +842,7 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApproval(t *testing.T) {
})
// add protected branch for commit status
csrf := GetUserCSRFToken(t, session)
csrf := GetUserCSRFToken(t, baseSession)
// Change master branch to protected
req := NewRequestWithValues(t, "POST", "/user2/repo1/settings/branches/edit", map[string]string{
"_csrf": csrf,
@@ -878,15 +852,15 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApproval(t *testing.T) {
"status_check_contexts": "gitea/actions",
"required_approvals": "1",
})
session.MakeRequest(t, req, http.StatusSeeOther)
baseSession.MakeRequest(t, req, http.StatusSeeOther)
// the first insert of the automerge record returns true
scheduled, err := automerge.ScheduleAutoMerge(t.Context(), user1, pr, repo_model.MergeStyleMerge, "auto merge test", false)
scheduled, err := automerge.ScheduleAutoMerge(t.Context(), baseUser, pr, repo_model.MergeStyleMerge, "auto merge test", false)
assert.NoError(t, err)
assert.True(t, scheduled)
// a second insert returns false because the automerge record already exists
scheduled, err = automerge.ScheduleAutoMerge(t.Context(), user1, pr, repo_model.MergeStyleMerge, "auto merge test", false)
scheduled, err = automerge.ScheduleAutoMerge(t.Context(), baseUser, pr, repo_model.MergeStyleMerge, "auto merge test", false)
assert.Error(t, err)
assert.False(t, scheduled)
@@ -900,14 +874,9 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApproval(t *testing.T) {
assert.NoError(t, err)
sha, err := baseGitRepo.GetRefCommitID(pr.GetGitHeadRefName())
assert.NoError(t, err)
masterCommitID, err := baseGitRepo.GetBranchCommitID("master")
assert.NoError(t, err)
baseGitRepo.Close()
defer func() {
testResetRepo(t, baseRepo.RepoPath(), "master", masterCommitID)
}()
err = commitstatus_service.CreateCommitStatus(t.Context(), baseRepo, user1, sha, &git_model.CommitStatus{
err = commitstatus_service.CreateCommitStatus(t.Context(), baseRepo, baseUser, sha, &git_model.CommitStatus{
State: commitstatus.CommitStatusSuccess,
TargetURL: "https://gitea.com",
Context: "gitea/actions",
@@ -928,13 +897,11 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApproval(t *testing.T) {
htmlDoc := NewHTMLParser(t, resp.Body)
testSubmitReview(t, approveSession, htmlDoc.GetCSRF(), "user2", "repo1", strconv.Itoa(int(pr.Index)), sha, "approve", http.StatusOK)
time.Sleep(2 * time.Second)
// reload pr again
pr = unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: pr.ID})
assert.True(t, pr.HasMerged)
assert.Eventually(t, func() bool {
pr = unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: pr.ID})
return pr.HasMerged
}, 2*time.Second, 100*time.Millisecond)
assert.NotEmpty(t, pr.MergedCommitID)
unittest.AssertNotExistsBean(t, &pull_model.AutoMerge{PullID: pr.ID})
})
}
@@ -994,7 +961,7 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApprovalForAgitFlow(t *testing.
HeadBranch: "user2/test/head2",
})
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
// add protected branch for commit status
csrf := GetUserCSRFToken(t, session)
// Change master branch to protected
@@ -1029,13 +996,7 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApprovalForAgitFlow(t *testing.
assert.NoError(t, err)
sha, err := baseGitRepo.GetRefCommitID(pr.GetGitHeadRefName())
assert.NoError(t, err)
masterCommitID, err := baseGitRepo.GetBranchCommitID("master")
assert.NoError(t, err)
baseGitRepo.Close()
defer func() {
testResetRepo(t, baseRepo.RepoPath(), "master", masterCommitID)
}()
err = commitstatus_service.CreateCommitStatus(t.Context(), baseRepo, user1, sha, &git_model.CommitStatus{
State: commitstatus.CommitStatusSuccess,
TargetURL: "https://gitea.com",
@@ -1049,7 +1010,7 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApprovalForAgitFlow(t *testing.
assert.Empty(t, pr.MergedCommitID)
// approve the PR from non-author
approveSession := loginUser(t, "user1")
approveSession := loginUser(t, "user1") // FIXME: don't use admin user for testing
req = NewRequest(t, "GET", fmt.Sprintf("/user2/repo1/pulls/%d", pr.Index))
resp := approveSession.MakeRequest(t, req, http.StatusOK)
htmlDoc := NewHTMLParser(t, resp.Body)
@@ -1067,11 +1028,9 @@ func TestPullAutoMergeAfterCommitStatusSucceedAndApprovalForAgitFlow(t *testing.
func TestPullNonMergeForAdminWithBranchProtection(t *testing.T) {
onGiteaRun(t, func(t *testing.T, u *url.URL) {
// create a pull request
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
forkedName := "repo1-1"
testRepoFork(t, session, "user2", "repo1", "user1", forkedName, "")
defer testDeleteRepository(t, session, "user1", forkedName)
testEditFile(t, session, "user1", forkedName, "master", "README.md", "Hello, World (Edited)\n")
testPullCreate(t, session, "user1", forkedName, false, "master", "master", "Indexer notifier test pull")
@@ -1113,7 +1072,7 @@ func TestPullNonMergeForAdminWithBranchProtection(t *testing.T) {
func TestPullSquashMergeEmpty(t *testing.T) {
onGiteaRun(t, func(t *testing.T, u *url.URL) {
session := loginUser(t, "user1")
session := loginUser(t, "user1") // FIXME: don't use admin user for testing
testEditFileToNewBranch(t, session, "user2", "repo1", "master", "pr-squash-empty", "README.md", "Hello, World (Edited)\n")
resp := testPullCreate(t, session, "user2", "repo1", false, "master", "pr-squash-empty", "This is a pull title")

View File

@@ -11,7 +11,6 @@ import (
"time"
auth_model "code.gitea.io/gitea/models/auth"
git_model "code.gitea.io/gitea/models/git"
issues_model "code.gitea.io/gitea/models/issues"
"code.gitea.io/gitea/models/perm"
repo_model "code.gitea.io/gitea/models/repo"
@@ -113,49 +112,6 @@ func TestAPIPullUpdateByRebase(t *testing.T) {
})
}
func TestAPIPullUpdateByRebase2(t *testing.T) {
onGiteaRun(t, func(t *testing.T, giteaURL *url.URL) {
// Create PR to test
user := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 2})
org26 := unittest.AssertExistsAndLoadBean(t, &user_model.User{ID: 26})
pr := createOutdatedPR(t, user, org26)
assert.NoError(t, pr.LoadBaseRepo(t.Context()))
assert.NoError(t, pr.LoadIssue(t.Context()))
enableRepoAllowUpdateWithRebase(t, pr.BaseRepo.ID, false)
session := loginUser(t, "user2")
token := getTokenForLoggedInUser(t, session, auth_model.AccessTokenScopeWriteRepository)
req := NewRequestf(t, "POST", "/api/v1/repos/%s/%s/pulls/%d/update?style=rebase", pr.BaseRepo.OwnerName, pr.BaseRepo.Name, pr.Issue.Index).
AddTokenAuth(token)
session.MakeRequest(t, req, http.StatusForbidden)
enableRepoAllowUpdateWithRebase(t, pr.BaseRepo.ID, true)
assert.NoError(t, pr.LoadHeadRepo(t.Context()))
// add a protected branch rule to the head branch to block rebase
pb := git_model.ProtectedBranch{
RepoID: pr.HeadRepo.ID,
RuleName: pr.HeadBranch,
CanPush: false,
CanForcePush: false,
}
err := git_model.UpdateProtectBranch(t.Context(), pr.HeadRepo, &pb, git_model.WhitelistOptions{})
assert.NoError(t, err)
req = NewRequestf(t, "POST", "/api/v1/repos/%s/%s/pulls/%d/update?style=rebase", pr.BaseRepo.OwnerName, pr.BaseRepo.Name, pr.Issue.Index).
AddTokenAuth(token)
session.MakeRequest(t, req, http.StatusForbidden)
// remove the protected branch rule to allow rebase
err = git_model.DeleteProtectedBranch(t.Context(), pr.HeadRepo, pb.ID)
assert.NoError(t, err)
req = NewRequestf(t, "POST", "/api/v1/repos/%s/%s/pulls/%d/update?style=rebase", pr.BaseRepo.OwnerName, pr.BaseRepo.Name, pr.Issue.Index).
AddTokenAuth(token)
session.MakeRequest(t, req, http.StatusOK)
})
}
func createOutdatedPR(t *testing.T, actor, forkOrg *user_model.User) *issues_model.PullRequest {
baseRepo, err := repo_service.CreateRepository(t.Context(), actor, actor, repo_service.CreateRepoOptions{
Name: "repo-pr-update",

View File

@@ -180,6 +180,36 @@ func TestUserSettingsUpdateEmail(t *testing.T) {
})
session.MakeRequest(t, req, http.StatusNotFound)
})
t.Run("primary email not found", func(t *testing.T) {
defer tests.PrintCurrentTest(t)()
session := loginUser(t, "user2")
req := NewRequestWithValues(t, "POST", "/user/settings/account/email", map[string]string{
"_method": "PRIMARY",
"id": "9999",
"_csrf": GetUserCSRFToken(t, session),
})
resp := session.MakeRequest(t, req, http.StatusSeeOther)
assert.Equal(t, "/user/settings/account", resp.Header().Get("Location"))
flashMsg := session.GetCookieFlashMessage()
assert.Equal(t, "The selected email address could not be found.", flashMsg.ErrorMsg)
})
t.Run("primary email not owned by user", func(t *testing.T) {
defer tests.PrintCurrentTest(t)()
session := loginUser(t, "user2")
req := NewRequestWithValues(t, "POST", "/user/settings/account/email", map[string]string{
"_method": "PRIMARY",
"id": "6",
"_csrf": GetUserCSRFToken(t, session),
})
resp := session.MakeRequest(t, req, http.StatusSeeOther)
assert.Equal(t, "/user/settings/account", resp.Header().Get("Location"))
flashMsg := session.GetCookieFlashMessage()
assert.Equal(t, "The selected email address could not be found.", flashMsg.ErrorMsg)
})
}
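Both new subtests assert the same flash message, because "email id exists but belongs to another user" must be indistinguishable from "email id does not exist". A minimal sketch of that lookup pattern — the type and function here are invented for illustration, not Gitea's models:

```go
package main

import "fmt"

// email is a minimal stand-in for a user email row (illustrative only).
type email struct {
	ID     int64
	UserID int64
}

// findOwnedEmail returns the address only when it both exists and belongs to
// the requesting user; an address owned by someone else is reported the same
// way as a missing one, which is the behavior the two subtests above assert.
func findOwnedEmail(emails []email, userID, emailID int64) (email, bool) {
	for _, e := range emails {
		if e.ID == emailID && e.UserID == userID {
			return e, true
		}
	}
	return email{}, false
}

func main() {
	emails := []email{{ID: 6, UserID: 10}, {ID: 7, UserID: 2}}

	_, ok := findOwnedEmail(emails, 2, 9999) // nonexistent id
	fmt.Println(ok)                          // false

	_, ok = findOwnedEmail(emails, 2, 6) // exists, but owned by user 10
	fmt.Println(ok)                      // false

	_, ok = findOwnedEmail(emails, 2, 7) // owned by the requesting user
	fmt.Println(ok)                      // true
}
```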
func TestUserSettingsDeleteEmail(t *testing.T) {

View File

@@ -35,9 +35,10 @@ const baseOptions: MonacoOpts = {
renderLineHighlight: 'all',
renderLineHighlightOnlyWhenFocus: true,
rulers: [],
scrollbar: {horizontalScrollbarSize: 6, verticalScrollbarSize: 6},
scrollbar: {horizontalScrollbarSize: 6, verticalScrollbarSize: 6, alwaysConsumeMouseWheel: false},
scrollBeyondLastLine: false,
automaticLayout: true,
editContext: false, // https://github.com/microsoft/monaco-editor/issues/5081
};
function getEditorconfig(input: HTMLInputElement): EditorConfig | null {

View File

@@ -27,7 +27,7 @@ function getDefaultSvgBoundsIfUndefined(text: string, src: string) {
const viewBox = svg.viewBox.baseVal;
return {
width: defaultSize,
height: defaultSize * viewBox.width / viewBox.height,
height: defaultSize * viewBox.height / viewBox.width,
};
}
return {

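The one-line fix above corrects an inverted aspect ratio: when the rendered width is pinned to `defaultSize`, the height must scale by `viewBox.height / viewBox.width`, not the reciprocal. A worked example of the corrected formula (the numbers are illustrative):

```go
package main

import "fmt"

// scaledHeight preserves the viewBox aspect ratio when the rendered width is
// forced to defaultSize, matching the corrected formula in the diff above.
func scaledHeight(defaultSize, vbWidth, vbHeight float64) float64 {
	return defaultSize * vbHeight / vbWidth
}

func main() {
	// A 100x50 viewBox rendered at width 16 keeps its 2:1 ratio: height 8.
	// The old, inverted formula (width/height) would have produced 32.
	fmt.Println(scaledHeight(16, 100, 50)) // 8
}
```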
View File

@@ -170,7 +170,9 @@ async function loadMoreFiles(btn: Element): Promise<boolean> {
const respFileBoxes = respDoc.querySelector('#diff-file-boxes');
// the response is a full HTML page; we need to extract the relevant contents:
// * append the newly loaded file list items to the existing list
document.querySelector('#diff-incomplete').replaceWith(...Array.from(respFileBoxes.children));
const respFileBoxesChildren = Array.from(respFileBoxes.children); // "children:HTMLCollection" will be empty after replaceWith
document.querySelector('#diff-incomplete').replaceWith(...respFileBoxesChildren);
for (const el of respFileBoxesChildren) window.htmx.process(el);
onShowMoreFiles();
return true;
} catch (error) {
@@ -200,7 +202,7 @@ function initRepoDiffShowMore() {
const resp = await response.text();
const respDoc = parseDom(resp, 'text/html');
const respFileBody = respDoc.querySelector('#diff-file-boxes .diff-file-body .file-body');
const respFileBodyChildren = Array.from(respFileBody.children); // respFileBody.children will be empty after replaceWith
const respFileBodyChildren = Array.from(respFileBody.children); // "children:HTMLCollection" will be empty after replaceWith
el.parentElement.replaceWith(...respFileBodyChildren);
for (const el of respFileBodyChildren) window.htmx.process(el);
// FIXME: calling onShowMoreFiles is not quite right here.

View File

@@ -23,6 +23,16 @@ When the selected items change, the `combo-value` input will be updated.
If there is `data-update-url`, it also calls backend to attach/detach the changed items.
Also, the changed items will be synchronized to the `ui list` items.
The menu items must have correct `href`, otherwise the links of synchronized (cloned) items would be wrong.
Synchronization logic:
* On page load:
* If the dropdown menu contains checked items, there will be no synchronization.
In this case, it's assumed that the dropdown menu is already in sync with the list.
* If the dropdown menu doesn't contain checked items, it will use the dropdown's value to mark the selected items as checked.
Then the selected (checked) items will be synchronized to the list.
* On dropdown selection change:
* The selected items will be synchronized to the list after the dropdown is hidden.
Items with the same `data-scope` allow only one selected item at a time.

View File

@@ -1,4 +1,4 @@
import {isDarkTheme} from '../utils.ts';
import {isDarkTheme, parseDom} from '../utils.ts';
import {makeCodeCopyButton} from './codecopy.ts';
import {displayError} from './common.ts';
import {queryElems} from '../utils/dom.ts';
@@ -43,12 +43,19 @@ export async function initMarkupCodeMermaid(elMarkup: HTMLElement): Promise<void
try {
// can't use bindFunctions here because we can't cross the iframe boundary. This
// means js-based interactions won't work, but they aren't intended to work either
const {svg} = await mermaid.render('mermaid', source);
const {svg} = await mermaid.render('mermaid', source, pre);
const svgDoc = parseDom(svg, 'image/svg+xml');
const svgNode = (svgDoc.documentElement as unknown) as SVGSVGElement;
const iframe = document.createElement('iframe');
iframe.classList.add('markup-content-iframe', 'tw-invisible');
iframe.srcdoc = html`<html><head><style>${htmlRaw(iframeCss)}</style></head><body>${htmlRaw(svg)}</body></html>`;
// although the "viewBox" is optional, mermaid's output should always have a correct viewBox with width and height
const iframeHeightFromViewBox = Math.ceil(svgNode.viewBox?.baseVal?.height ?? 0);
if (iframeHeightFromViewBox) iframe.style.height = `${iframeHeightFromViewBox}px`;
// FIXME: the logic is not right; the full fix is on the main branch
const mermaidBlock = document.createElement('div');
mermaidBlock.classList.add('mermaid-block', 'is-loading', 'tw-hidden');
mermaidBlock.append(iframe);
@@ -59,11 +66,12 @@ export async function initMarkupCodeMermaid(elMarkup: HTMLElement): Promise<void
const updateIframeHeight = () => {
const body = iframe.contentWindow?.document?.body;
if (body) {
if (body?.clientHeight) {
iframe.style.height = `${body.clientHeight}px`;
}
};
// FIXME: the logic is not right; the full fix is on the main branch
iframe.addEventListener('load', () => {
pre.replaceWith(mermaidBlock);
mermaidBlock.classList.remove('tw-hidden');