                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                              \___|\___/|_| \_\_____|

               Things that could be nice to do in the future

 Things to do in project curl. Please tell us what you think, contribute and
 send us patches that improve things.

 Be aware that these are things that we could do, or have once been considered
 things we could do. If you want to work on any of these areas, please
 consider bringing it up for discussions first on the mailing list so that we
 all agree it is still a good idea for the project.

 All bugs documented in the KNOWN_BUGS document are subject for fixing.

 1. libcurl
 1.1 TFO support on Windows
 1.2 Consult %APPDATA% also for .netrc
 1.3 struct lifreq
 1.4 Better and more sharing
 1.5 get rid of PATH_MAX
 1.6 native IDN support on macOS
 1.8 CURLOPT_RESOLVE for any port number
 1.9 Cache negative name resolves
 1.10 auto-detect proxy
 1.11 minimize dependencies with dynamically loaded modules
 1.12 updated DNS server while running
 1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
 1.15 Monitor connections in the connection pool
 1.16 Try to URL encode given URL
 1.17 Add support for IRIs
 1.18 try next proxy if one does not work
 1.19 provide timing info for each redirect
 1.20 SRV and URI DNS records
 1.21 netrc caching and sharing
 1.22 CURLINFO_PAUSE_STATE
 1.23 Offer API to flush the connection pool
 1.25 Expose tried IP addresses that failed
 1.28 FD_CLOEXEC
 1.29 WebSocket read callback
 1.30 config file parsing
 1.31 erase secrets from heap/stack after use
 1.32 add asynch getaddrinfo support
 1.33 make DoH inherit more transfer properties

 2. libcurl - multi interface
 2.1 More non-blocking
 2.2 Better support for same name resolves
 2.3 Non-blocking curl_multi_remove_handle()
 2.4 Split connect and authentication process
 2.5 Edge-triggered sockets should work
 2.6 multi upkeep
 2.7 Virtual external sockets
 2.8 dynamically decide to use socketpair

 3. Documentation
 3.1 Improve documentation about fork safety
 3.2 Provide cmake config-file

 4. FTP
 4.1 HOST
 4.2 Alter passive/active on failure and retry
 4.3 Earlier bad letter detection
 4.4 Support CURLOPT_PREQUOTE for dir listings too
 4.5 ASCII support
 4.6 GSSAPI via Windows SSPI
 4.7 STAT for LIST without data connection
 4.8 Passive transfer could try other IP addresses

 5. HTTP
 5.1 Provide the error body from a CONNECT response
 5.2 Obey Retry-After in redirects
 5.3 Rearrange request header order
 5.4 Allow SAN names in HTTP/2 server push
 5.5 auth= in URLs
 5.6 alt-svc should fallback if alt-svc does not work
 5.7 Require HTTP version X or higher

 6. TELNET
 6.1 ditch stdin
 6.2 ditch telnet-specific select
 6.3 feature negotiation debug data
 6.4 exit immediately upon connection if stdin is /dev/null

 7. SMTP
 7.1 Passing NOTIFY option to CURLOPT_MAIL_RCPT
 7.2 Enhanced capability support
 7.3 Add CURLOPT_MAIL_CLIENT option

 8. POP3
 8.2 Enhanced capability support

 9. IMAP
 9.1 Enhanced capability support

 10. LDAP
 10.1 SASL based authentication mechanisms
 10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
 10.3 Paged searches on LDAP server
 10.4 Certificate-Based Authentication

 11. SMB
 11.1 File listing support
 11.2 Honor file timestamps
 11.3 Use NTLMv2
 11.4 Create remote directories

 12. FILE
 12.1 Directory listing for FILE:

 13. TLS
 13.1 TLS-PSK with OpenSSL
 13.2 Provide mutex locking API
 13.3 Defeat TLS fingerprinting
 13.4 Cache/share OpenSSL contexts
 13.5 Export session ids
 13.6 Provide callback for cert verification
 13.7 Less memory massaging with Schannel
 13.8 Support DANE
 13.9 TLS record padding
 13.10 Support Authority Information Access certificate extension (AIA)
 13.11 Some TLS options are not offered for HTTPS proxies
 13.12 Reduce CA certificate bundle reparsing
 13.13 Make sure we forbid TLS 1.3 post-handshake authentication
 13.14 Support the clienthello extension
 13.15 Select signature algorithms

 14. GnuTLS
 14.2 check connection

 15. Schannel
 15.1 Extend support for client certificate authentication
 15.2 Extend support for the --ciphers option
 15.4 Add option to allow abrupt server closure

 16. SASL
 16.1 Other authentication mechanisms
 16.2 Add QOP support to GSSAPI authentication

 17. SSH protocols
 17.1 Multiplexing
 17.2 Handle growing SFTP files
 17.3 Read keys from ~/.ssh/id_ecdsa, id_ed25519
 17.4 Support CURLOPT_PREQUOTE
 17.5 SSH over HTTPS proxy with more backends
 17.6 SFTP with SCP://

 18. Command line tool
 18.1 sync
 18.2 glob posts
 18.4 --proxycommand
 18.5 UTF-8 filenames in Content-Disposition
 18.6 Option to make -Z merge lined based outputs on stdout
 18.8 Consider convenience options for JSON and XML?
 18.9 Choose the name of file in braces for complex URLs
 18.10 improve how curl works in a windows console window
 18.11 Windows: set attribute 'archive' for completed downloads
 18.12 keep running, read instructions from pipe/socket
 18.13 Ratelimit or wait between serial requests
 18.14 --dry-run
 18.15 --retry should resume
 18.16 send only part of --data
 18.17 consider file name from the redirected URL with -O ?
 18.18 retry on network is unreachable
 18.19 expand ~/ in config files
 18.20 host name sections in config files
 18.21 retry on the redirected-to URL
 18.23 Set the modification date on an uploaded file
 18.24 Use multiple parallel transfers for a single download
 18.25 Prevent terminal injection when writing to terminal
 18.26 Custom progress meter update interval
 18.27 -J and -O with %-encoded file names
 18.28 -J with -C -
 18.29 --retry and transfer timeouts

 19. Build
 19.2 Enable PIE and RELRO by default
 19.3 Do not use GNU libtool on OpenBSD
 19.4 Package curl for Windows in a signed installer
 19.5 make configure use --cache-file more and better
 19.6 build curl with Windows Unicode support

 20. Test suite
 20.1 SSL tunnel
 20.2 nicer lacking perl message
 20.3 more protocols supported
 20.4 more platforms supported
 20.5 Add support for concurrent connections
 20.6 Use the RFC 6265 test suite
 20.7 Support LD_PRELOAD on macOS
 20.8 Run web-platform-tests URL tests

 21. MQTT
 21.1 Support rate-limiting

 22. TFTP
 22.1 TFTP doesn't convert LF to CRLF for mode=netascii

==============================================================================

1. libcurl

1.1 TFO support on Windows

 libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
 Mac OS. Windows supports TCP Fast Open starting with Windows 10, version
 1607 and we should add support for it.

 TCP Fast Open is supported on several platforms but not on Windows. Work on
 this was once started but never finished.

 See https://github.com/curl/curl/pull/3378

1.2 Consult %APPDATA% also for .netrc

 %APPDATA%\.netrc is not considered when running on Windows. Should it not
 be?

 See https://github.com/curl/curl/issues/4016

1.3 struct lifreq

 Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
 SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete,
 in order to support IPv6 interface addresses for network interfaces
 properly.

1.4 Better and more sharing

 The share interface could benefit from allowing the alt-svc cache to be
 shared between easy handles.

 See https://github.com/curl/curl/issues/4476

 The share interface offers CURL_LOCK_DATA_CONNECT to have multiple easy
 handles share a connection cache, but due to how connections are used they
 are still not thread-safe when shared.

 See https://github.com/curl/curl/issues/4915 and lib1541.c

 The share interface offers CURL_LOCK_DATA_HSTS to have multiple easy handles
 share an HSTS cache, but this is not thread-safe.

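 As a point of reference, this is roughly how an application uses the
 existing share interface to share the connection cache between easy handles
 today. It is a minimal sketch: the lock callbacks are stubs that must be
 backed by real mutexes in an application, and as noted above the shared
 connection data is still not fully thread-safe.

  #include <curl/curl.h>

  static void lock_cb(CURL *h, curl_lock_data data,
                      curl_lock_access access, void *userp)
  {
    /* a real application locks a mutex dedicated to 'data' here */
    (void)h; (void)data; (void)access; (void)userp;
  }

  static void unlock_cb(CURL *h, curl_lock_data data, void *userp)
  {
    /* ... and unlocks it here */
    (void)h; (void)data; (void)userp;
  }

  int main(void)
  {
    CURLSH *share = curl_share_init();
    curl_share_setopt(share, CURLSHOPT_LOCKFUNC, lock_cb);
    curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, unlock_cb);
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT);

    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_SHARE, share);
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(easy);

    curl_easy_cleanup(easy);
    curl_share_cleanup(share);
    return 0;
  }
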
1.5 get rid of PATH_MAX

 Having code use and rely on PATH_MAX is not nice:
 https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

 Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
 there we need libssh2 to properly tell us when we pass in a too small buffer
 and its current API (as of libssh2 1.2.7) does not.

1.6 native IDN support on macOS

 On recent macOS versions, the getaddrinfo() function itself has built-in IDN
 support. By setting the AI_CANONNAME flag, the function will return the
 encoded name in the ai_canonname struct field in the returned information.
 This could be used by curl on macOS when built without a separate IDN
 library and an IDN host name is used in a URL.

 See initial work in https://github.com/curl/curl/pull/5371

1.8 CURLOPT_RESOLVE for any port number

 This option allows applications to set a replacement IP address for a given
 host + port pair. Consider adding support for providing a replacement
 address for the host name on all port numbers.

 See https://github.com/curl/curl/issues/1264

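 For reference, this is how the existing option is used: the application
 names the exact port, so one entry is needed per host + port pair. A
 hypothetical "any port" form would remove that repetition. Minimal sketch
 using only current API:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *dns = NULL;

    /* today: one entry per host + port pair */
    dns = curl_slist_append(dns, "example.com:443:192.0.2.1");
    dns = curl_slist_append(dns, "example.com:8443:192.0.2.1");

    curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);

    curl_slist_free_all(dns);
    curl_easy_cleanup(curl);
    return 0;
  }
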
1.9 Cache negative name resolves

 A name resolve that has failed is likely to fail when made again within a
 short period of time. Currently we only cache positive responses.

1.10 auto-detect proxy

 libcurl could be made to detect the system proxy setup automatically and use
 that. On Windows, macOS and Linux desktops for example.

 The pull-request to use libproxy for this was deferred due to doubts on the
 reliability of the dependency and how to use it:
 https://github.com/curl/curl/pull/977

 libdetectproxy is a (C++) library for detecting the proxy on Windows
 https://github.com/paulharris/libdetectproxy

1.11 minimize dependencies with dynamically loaded modules

 We can create a system with loadable modules/plug-ins, where these modules
 would be the ones that link to 3rd party libs. That would allow us to avoid
 having to load ALL dependencies, since only the ones needed for the
 protocols actually used by the application would have to be loaded. See
 https://github.com/curl/curl/issues/349

1.12 updated DNS server while running

 If /etc/resolv.conf gets updated while a program using libcurl is running,
 it may cause name resolves to fail unless res_init() is called. We should
 consider calling res_init() + retry once unconditionally on all name resolve
 failures to mitigate against this. Firefox works like that. Note that
 Windows does not have res_init() or an alternative.

 https://github.com/curl/curl/issues/2251

1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION

 curl will create most sockets via the CURLOPT_OPENSOCKETFUNCTION callback
 and close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However,
 c-ares does not use those functions and instead opens and closes the sockets
 itself. This means that when curl passes the c-ares socket to the
 CURLMOPT_SOCKETFUNCTION it is not owned by the application like other
 sockets.

 See https://github.com/curl/curl/issues/2734

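 For reference, these are the two callbacks referred to above; sockets that
 c-ares creates never pass through them. A minimal POSIX sketch of what an
 application installs today:

  #include <curl/curl.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static curl_socket_t open_cb(void *clientp, curlsocktype purpose,
                               struct curl_sockaddr *addr)
  {
    (void)clientp; (void)purpose;
    /* the application owns sockets created here ... */
    return socket(addr->family, addr->socktype, addr->protocol);
  }

  static int close_cb(void *clientp, curl_socket_t sock)
  {
    (void)clientp;
    /* ... and closes them here, but c-ares sockets never show up */
    return close(sock);
  }

  /* curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_cb);
     curl_easy_setopt(curl, CURLOPT_CLOSESOCKETFUNCTION, close_cb); */
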
1.15 Monitor connections in the connection pool

 libcurl's connection cache or pool holds a number of open connections for
 the purpose of possible subsequent connection reuse. It may contain a few up
 to a significant amount of connections. Currently, libcurl leaves all
 connections as they are and only verifies that a connection is still alive
 when it is iterated over for matching or reuse purposes.

 Those connections may get closed by the server side for idleness or they may
 get an HTTP/2 ping from the peer to verify that they are still alive. By
 adding monitoring of the connections while in the pool, libcurl can detect
 dead connections (and close them) better and earlier, and it can handle
 HTTP/2 pings to keep such ones alive even when not actively doing transfers
 on them.

1.16 Try to URL encode given URL

 Given a URL that for example contains spaces, libcurl could have an option
 that would try somewhat harder than it does now and convert spaces to %20
 and perhaps URL encode byte values over 128 etc (basically do what the
 redirect following code already does).

 https://github.com/curl/curl/issues/514

1.17 Add support for IRIs

 IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
 support this, curl/libcurl would need to translate/encode the given input
 from the input string encoding into percent encoded output "over the wire".

 To make that work smoothly for curl users even on Windows, curl would
 probably need to be able to convert from several input encodings.

1.18 try next proxy if one does not work

 Allow an application to specify a list of proxies to try and, when a
 connection to the first fails, go on and try the next until the list is
 exhausted. Browsers support this feature at least when they specify proxies
 using PACs.

 https://github.com/curl/curl/issues/896

1.19 provide timing info for each redirect

 curl and libcurl provide timing information via a set of different
 time-stamps (CURLINFO_*_TIME). When curl is following redirects, those
 returned time values are accumulated sums. An improvement could be to offer
 separate timings for each redirect.

 https://github.com/curl/curl/issues/6743

1.20 SRV and URI DNS records

 Offer support for resolving SRV and URI DNS records for libcurl to know
 which server to connect to for various protocols (including HTTP).

1.21 netrc caching and sharing

 The netrc file is read and parsed each time a connection is set up, which
 means that if a transfer needs multiple connections for authentication or
 redirects, the file might be reread (and parsed) multiple times. This makes
 it impossible to provide the file as a pipe.

1.22 CURLINFO_PAUSE_STATE

 Return information about the transfer's current pause state, in both
 directions. https://github.com/curl/curl/issues/2588

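 For context, an application can change the pause state today but cannot ask
 libcurl for the current state; it has to track it itself. A small fragment
 of the existing API ('handle' is assumed to be a running easy handle;
 CURLINFO_PAUSE_STATE itself is only a proposed name):

  /* a write callback can pause the receive direction by returning
     CURL_WRITEFUNC_PAUSE instead of the number of bytes consumed */
  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
  {
    (void)ptr; (void)size; (void)nmemb; (void)userp;
    return CURL_WRITEFUNC_PAUSE;
  }

  /* later, the application resumes both directions - and today it must
     remember on its own that the handle was paused */
  curl_easy_pause(handle, CURLPAUSE_CONT);
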
1.23 Offer API to flush the connection pool

 Sometimes applications want to flush all the existing connections kept
 alive. An API could allow a forced flush or just a forced loop that would
 properly close all connections that have been closed by the server already.

1.25 Expose tried IP addresses that failed

 When libcurl fails to connect to a host, it could offer the application the
 addresses that were used in the attempt. Source + dest IP, source + dest
 port and protocol (UDP or TCP) for each failure. Possibly as a callback.
 Perhaps also provide "reason".

 https://github.com/curl/curl/issues/2126

1.28 FD_CLOEXEC

 FD_CLOEXEC sets the close-on-exec flag for the file descriptor, which causes
 the file descriptor to be automatically (and atomically) closed when any of
 the exec-family functions succeed. Should libcurl perhaps set it by default?

 https://github.com/curl/curl/issues/2252

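 Until that happens, an application can set the flag itself from the socket
 callback. A minimal POSIX sketch (on Linux, passing SOCK_CLOEXEC to socket()
 would avoid the small window between the two calls):

  #include <curl/curl.h>
  #include <fcntl.h>
  #include <sys/socket.h>

  static curl_socket_t cloexec_socket(void *clientp, curlsocktype purpose,
                                      struct curl_sockaddr *addr)
  {
    curl_socket_t s = socket(addr->family, addr->socktype, addr->protocol);
    (void)clientp; (void)purpose;
    if(s != CURL_SOCKET_BAD)
      fcntl(s, F_SETFD, FD_CLOEXEC); /* do not leak this fd across exec */
    return s;
  }

  /* curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, cloexec_socket); */
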
1.29 WebSocket read callback

 Call the read callback once the connection is established to allow sending
 the first message in the connection.

 https://github.com/curl/curl/issues/11402

1.30 config file parsing

 Consider providing an API, possibly in a separate companion library, for
 parsing a config file like curl's -K/--config option to allow applications
 to get the same ability to read curl options from files.

 See https://github.com/curl/curl/issues/3698

1.31 erase secrets from heap/stack after use

 Introducing a concept and system to erase secrets from memory after use
 could help mitigate and lessen the impact of (future) security problems etc.
 However: most secrets are passed to libcurl as clear text from the
 application and then clearing them within the library adds nothing...

 https://github.com/curl/curl/issues/7268

1.32 add asynch getaddrinfo support

 Use getaddrinfo_a() to provide an asynch name resolver backend to libcurl
 that does not use threads and does not depend on c-ares. The getaddrinfo_a
 function is (probably?) glibc specific but that is a widely used libc among
 our users.

 https://github.com/curl/curl/pull/6746

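 For reference, a rough sketch of the glibc interface such a backend would
 build on. GAI_NOWAIT starts the lookup without blocking and the program
 links with -lanl; error handling is omitted:

  #define _GNU_SOURCE
  #include <netdb.h>
  #include <stdio.h>

  int main(void)
  {
    struct gaicb req = { .ar_name = "example.com" };
    struct gaicb *list[1] = { &req };

    getaddrinfo_a(GAI_NOWAIT, list, 1, NULL); /* returns immediately */

    /* ... do other work, then wait for (or poll) the result */
    gai_suspend((const struct gaicb * const *)list, 1, NULL);

    if(gai_error(&req) == 0)
      printf("first address family: %d\n", req.ar_result->ai_family);
    return 0;
  }
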
1.33 make DoH inherit more transfer properties

 Some options are not inherited because they are not relevant for the DoH SSL
 connections, or inheriting the option may result in unexpected behavior. For
 example the user's debug function callback is not inherited because it would
 be unexpected for internal handles (i.e. DoH handles) to be passed to that
 callback.

 If an option is not inherited then it is not possible to set it separately
 for DoH without a DoH-specific option. For example:
 CURLOPT_DOH_SSL_VERIFYHOST, CURLOPT_DOH_SSL_VERIFYPEER and
 CURLOPT_DOH_SSL_VERIFYSTATUS.

 See https://github.com/curl/curl/issues/6605

2. libcurl - multi interface

2.1 More non-blocking

 Make sure we do not ever loop because of non-blocking sockets returning
 EWOULDBLOCK or similar. Blocking cases include:

 - Name resolves on non-Windows unless c-ares or the threaded resolver is
   used.

 - The threaded resolver may block on cleanup:
   https://github.com/curl/curl/issues/4852

 - file:// transfers

 - TELNET transfers

 - GSSAPI authentication for FTP transfers

 - The "DONE" operation (post transfer protocol-specific actions) for the
   protocols SFTP, SMTP, FTP. Fixing multi_done() for this is a worthy task.

 - curl_multi_remove_handle for any of the above. See section 2.3.

2.2 Better support for same name resolves

 If a name resolve has been initiated for name NN and a second easy handle
 wants to resolve that name as well, make it wait for the first resolve to
 end up in the cache instead of doing a second separate resolve. This is
 especially needed when adding many simultaneous handles using the same host
 name, when the DNS resolver can get flooded.

2.3 Non-blocking curl_multi_remove_handle()

 The multi interface has a few API calls that assume a blocking behavior,
 like add_handle() and remove_handle(), which limits what we can do
 internally. The multi API needs to be moved even more into a single function
 that "drives" everything in a non-blocking manner and signals when something
 is done. A remove or add would then only ask for the action to get started
 and then multi_perform() etc would still be called until the add/remove is
 completed.

2.4 Split connect and authentication process

 The multi interface treats the authentication process as part of the connect
 phase. As such, any failures during authentication will not trigger the
 relevant QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

 The multi_socket API should work with edge-triggered socket events. One of
 the internal actions that need to be improved for this to work perfectly is
 the 'maxloops' handling in transfer.c:readwrite_data().

2.6 multi upkeep

 In libcurl 7.62.0 we introduced curl_easy_upkeep. It unfortunately only
 works on easy handles. We should introduce a version of that for the multi
 handle, and also consider doing "upkeep" automatically on connections in the
 connection pool when the multi handle is in use.

 See https://github.com/curl/curl/issues/3199

2.7 Virtual external sockets

 libcurl performs operations on the given file descriptor that presume it is
 a socket and an application cannot replace them at the moment. Allowing an
 application to fully replace those would allow a larger degree of freedom
 and flexibility.

 See https://github.com/curl/curl/issues/5835

2.8 dynamically decide to use socketpair

 For users who do not use curl_multi_wait() or do not care for
 curl_multi_wakeup(), we could introduce a way to make libcurl NOT create a
 socketpair in the multi handle.

 See https://github.com/curl/curl/issues/4829

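 For context, the socketpair is what lets curl_multi_wakeup() interrupt a
 sleeping curl_multi_poll() from another thread; applications that never do
 that still pay for it, which is what this item wants to make optional. A
 small fragment showing the existing calls ('multi' is assumed to be a multi
 handle with transfers added):

  /* thread A: drive transfers */
  int running, numfds;
  do {
    curl_multi_perform(multi, &running);
    curl_multi_poll(multi, NULL, 0, 1000, &numfds); /* may sleep up to 1s */
  } while(running);

  /* thread B: interrupt the sleep early, e.g. after queuing new work */
  curl_multi_wakeup(multi);
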
3. Documentation

3.1 Improve documentation about fork safety

 See https://github.com/curl/curl/issues/6968

3.2 Provide cmake config-file

 A config-file package is a set of files provided by us to allow applications
 to write cmake scripts to find and use libcurl more easily. See
 https://github.com/curl/curl/issues/885

4. FTP

4.1 HOST

 HOST is a command for a client to tell the server which host name to use,
 offering FTP servers name-based virtual hosting:

 https://datatracker.ietf.org/doc/html/rfc7151

4.2 Alter passive/active on failure and retry

 When trying to connect passively to a server which only supports active
 connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
 connection. There could be a way to fall back to an active connection (and
 vice versa). https://curl.se/bug/feature.cgi?id=1754793

4.3 Earlier bad letter detection

 Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in
 the process to avoid doing a resolve and connect in vain.

4.4 Support CURLOPT_PREQUOTE for dir listings too

 The lack of support is mostly an oversight and requires the FTP state
 machine to get updated to get fixed.

 https://github.com/curl/curl/issues/8602

4.5 ASCII support

 FTP ASCII transfers do not follow RFC 959. They do not convert the data
 accordingly.

4.6 GSSAPI via Windows SSPI

 In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
 via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
 support for GSSAPI authentication via Windows SSPI.

4.7 STAT for LIST without data connection

 Some FTP servers allow STAT for listing directories instead of using LIST,
 and the response is then sent over the control connection instead of as the
 otherwise used data connection:
 https://www.nsftools.com/tips/RawFTP.htm#STAT

 This is not detailed in any FTP specification.

4.8 Passive transfer could try other IP addresses

 When doing FTP operations through a proxy at localhost, the reporter spotted
 that curl only tried to connect once to the proxy, while it had multiple
 addresses and a failed connect on one address should make it try the next.

 After switching to passive mode (EPSV), curl could try all IP addresses for
 "localhost". Currently it tries ::1, but it should also try 127.0.0.1.

 See https://github.com/curl/curl/issues/1508

5. HTTP

5.1 Provide the error body from a CONNECT response

 When curl receives a body response from a CONNECT request to a proxy, it
 will always just read and ignore it. It would make some users happy if curl
 instead optionally would be able to make that response body available. Via a
 new callback? Through some other means?

 See https://github.com/curl/curl/issues/9513

5.2 Obey Retry-After in redirects

 The Retry-After header is said to dictate "the minimum time that the user
 agent is asked to wait before issuing the redirected request" and libcurl
 does not obey this.

 See https://github.com/curl/curl/issues/11447

5.3 Rearrange request header order

 Server implementers often make an effort to detect browsers and to reject
 clients they detect as not matching. One of the last details we cannot yet
 control in libcurl's HTTP requests, which also can be exploited to detect
 that libcurl is in fact used even when it tries to impersonate a browser, is
 the order of the request headers. I propose that we introduce a new option
 in which you give headers a number, and then when the HTTP request is built
 it sorts the headers based on that number. We could then have internally
 created headers use a default value so only headers that need to be moved
 have to be specified.

5.4 Allow SAN names in HTTP/2 server push

 curl only allows HTTP/2 push promise if the provided :authority header value
 exactly matches the host name given in the URL. It could be extended to
 allow any name that would match the Subject Alternative Names in the
 server's TLS certificate.

 See https://github.com/curl/curl/pull/3581

5.5 auth= in URLs

 Add the ability to specify the preferred authentication mechanism to use by
 using ;auth=<mech> in the login part of the URL.

 For example:

 http://test:pass;auth=NTLM@example.com would be equivalent to specifying
 --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.

 Additionally this should be implemented for proxy base URLs as well.

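 For comparison, the same effect is reached today with explicit libcurl
 options; the URL syntax above would merely be a shorthand for this fragment:

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_USERPWD, "test:pass");
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
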
5.6 alt-svc should fallback if alt-svc does not work

 The alt-svc: header provides a set of alternative services for curl to use
 instead of the original. If the first attempted one fails, it should try the
 next etc and if all alternatives fail, go back to the original.

 See https://github.com/curl/curl/issues/4908

5.7 Require HTTP version X or higher

 curl and libcurl provide options for trying higher HTTP versions (for
 example HTTP/2) but then still allow the server to pick version 1.1. We
 could consider adding a way to require a minimum version.

 See https://github.com/curl/curl/issues/7980

6. TELNET

6.1 ditch stdin

 Reading input (to send to the remote server) on stdin is a crappy solution
 for library purposes. We need to invent a good way for the application to be
 able to provide the data to send.

6.2 ditch telnet-specific select

 Make the telnet support's network select() loop go away and merge the code
 into the main transfer loop. Until this is done, the multi interface will
 not work for telnet.

6.3 feature negotiation debug data

 Add telnet feature negotiation data to the debug callback as header data.

6.4 exit immediately upon connection if stdin is /dev/null

 If it did, curl could be used to probe if there is a server listening on a
 specific port. That is, the following command would exit immediately after
 the connection is established with exit code 0:

   curl -s --connect-timeout 2 telnet://example.com:80 </dev/null

7. SMTP

7.1 Passing NOTIFY option to CURLOPT_MAIL_RCPT

 Is there a way to pass the NOTIFY option to the CURLOPT_MAIL_RCPT option? I
 set a string that already contains a bracket. For instance something like
 this: curl_slist_append( recipients, "<foo@bar> NOTIFY=SUCCESS,FAILURE" );

 https://github.com/curl/curl/issues/8232

7.2 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the EHLO command.

7.3 Add CURLOPT_MAIL_CLIENT option

 Rather than use the URL to specify the mail client string to present in the
 HELO and EHLO commands, libcurl should support a new CURLOPT specifically
 for specifying this data as the URL is non-standard and to be honest a bit
 of a hack ;-)

 Please see the following thread for more information:
 https://curl.se/mail/lib-2012-05/0178.html

8. POP3

8.2 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the CAPA command.

9. IMAP

9.1 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the CAPABILITY command.

10. LDAP

10.1 SASL based authentication mechanisms

 Currently the LDAP module only supports ldap_simple_bind_s() in order to
 bind to an LDAP server. However, this function sends username and password
 details using the simple authentication mechanism (as clear text). It should
 instead be possible to use ldap_bind_s(), specifying the security context
 information ourselves.

10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS

 CURLOPT_SSL_CTX_FUNCTION works perfectly for HTTPS and email protocols, but
 it has no effect for LDAPS connections.

 https://github.com/curl/curl/issues/4108

10.3 Paged searches on LDAP server

 https://github.com/curl/curl/issues/4452

10.4 Certificate-Based Authentication

 LDAPS is not possible on macOS and Windows with certificate-based
 authentication.

 https://github.com/curl/curl/issues/9641

11. SMB

11.1 File listing support

 Add support for listing the contents of an SMB share. The output should
 probably be the same as/similar to FTP.

11.2 Honor file timestamps

 The timestamp of the transferred file should reflect that of the original
 file.

11.3 Use NTLMv2

 Currently the SMB authentication uses NTLMv1.

11.4 Create remote directories

 Support for creating remote directories when uploading a file to a directory
 that does not exist on the server, just like --ftp-create-dirs.

12. FILE

12.1 Directory listing for FILE:

 Add support for listing the contents of a directory accessed with FILE. The
 output should probably be the same as/similar to FTP.

13. TLS

13.1 TLS-PSK with OpenSSL

 Transport Layer Security pre-shared key ciphersuites (TLS-PSK) is a set of
 cryptographic protocols that provide secure communication based on
 pre-shared keys (PSKs). These pre-shared keys are symmetric keys shared in
 advance among the communicating parties.

 https://github.com/curl/curl/issues/5081

13.2 Provide mutex locking API

 Provide a libcurl API for setting mutex callbacks in the underlying SSL
 library, so that the same application code can use mutex-locking
 independently of OpenSSL or GnuTLS being used.

13.3 Defeat TLS fingerprinting

 By changing the order of TLS extensions provided in the TLS handshake, it is
 sometimes possible to circumvent TLS fingerprinting by servers. The TLS
 extension order is of course not the only way to fingerprint a client.

 See https://github.com/curl/curl/issues/8119

13.4 Cache/share OpenSSL contexts

 "Look at SSL cafile - quick traces look to me like these are done on every
 request as well, when they should only be necessary once per SSL context (or
 once per handle)". The major improvement we can rather easily do is to make
 sure we do not create and kill a new SSL "context" for every request, but
 instead make one for every connection and reuse that SSL context in the same
 style connections are reused. It will make us use slightly more memory but
 it will make libcurl do fewer creations and deletions of SSL contexts.

 Technically, the "caching" is probably best implemented by getting added to
 the share interface so that easy handles that want to and can reuse the
 context specify that by sharing with the right properties set.

 https://github.com/curl/curl/issues/1110

13.5 Export session ids

 Add an interface to libcurl that enables "session IDs" to get
 exported/imported. Cris Bailiff said: "OpenSSL has functions which can
 serialise the current SSL state to a buffer of your choice, and
 recover/reset the state from such a buffer at a later date - this is used by
 mod_ssl for apache to implement an SSL session ID cache".

13.6 Provide callback for cert verification

 OpenSSL supports a callback for customised verification of the peer
 certificate, but this does not seem to be exposed in the libcurl APIs. Could
 it be? There is so much that could be done if it were.

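 Today this can be approximated, but only when libcurl uses the OpenSSL
 backend, by installing OpenSSL's own verify callback from
 CURLOPT_SSL_CTX_FUNCTION; the item asks for a backend-independent way. A
 minimal OpenSSL-only sketch:

  #include <curl/curl.h>
  #include <openssl/ssl.h>

  static int verify_cb(int preverify_ok, X509_STORE_CTX *x509_ctx)
  {
    /* custom certificate checks would go here */
    (void)x509_ctx;
    return preverify_ok;
  }

  static CURLcode sslctx_cb(CURL *curl, void *sslctx, void *parm)
  {
    (void)curl; (void)parm;
    SSL_CTX_set_verify((SSL_CTX *)sslctx, SSL_VERIFY_PEER, verify_cb);
    return CURLE_OK;
  }

  /* curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb); */
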
13.7 Less memory massaging with Schannel

 The Schannel backend does a lot of custom memory management we would rather
 avoid: the repeated alloc + free in sends and the custom memory + realloc
 system for encrypted and decrypted data. That should be avoided and reduced
 for 1) efficiency and 2) safety.

13.8 Support DANE

 DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
 keys and certs over DNS using DNSSEC as an alternative to the CA model.
 https://www.rfc-editor.org/rfc/rfc6698.txt

 An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
 (https://curl.se/mail/lib-2013-03/0075.html) but it was a too simple
 approach. See Daniel's comments:
 https://curl.se/mail/lib-2013-03/0103.html. libunbound may be the correct
 library to base this development on.

 Björn Stenberg wrote a separate initial take on DANE that was never
 completed.

13.9 TLS record padding

 TLS (1.3) offers optional record padding and OpenSSL provides an API for it.
 It could make sense for libcurl to offer this ability to applications to
 make traffic patterns harder to figure out by network traffic observers.

 See https://github.com/curl/curl/issues/5398

13.10 Support Authority Information Access certificate extension (AIA)

 AIA can provide various things like CRLs but more importantly information
 about intermediate CA certificates that can allow the validation path to be
 fulfilled when the HTTPS server does not itself provide them.

 Since AIA is about downloading certs on demand to complete a TLS handshake,
 it is probably a bit tricky to get done right.

 See https://github.com/curl/curl/issues/2793

13.11 Some TLS options are not offered for HTTPS proxies

 Some TLS related options to the command line tool and libcurl are only
 provided for the server and not for HTTPS proxies. --proxy-tls-max,
 --proxy-tlsv1.3, --proxy-curves and a few more.

 https://github.com/curl/curl/issues/12286

13.12 Reduce CA certificate bundle reparsing

 When using the OpenSSL backend, curl will load and reparse the CA bundle at
 the creation of the "SSL context" when it sets up a connection to do a TLS
 handshake. A more effective way would be to somehow cache the CA bundle to
 avoid it having to be repeatedly reloaded and reparsed.

 See https://github.com/curl/curl/issues/9379

13.13 Make sure we forbid TLS 1.3 post-handshake authentication

 RFC 8740 explains how using HTTP/2 must forbid the use of TLS 1.3
 post-handshake authentication. We should make sure to live up to that.

 See https://github.com/curl/curl/issues/5396

13.14 Support the clienthello extension

 Certain stupid networks and middle boxes have a problem with SSL handshake
 packets that are within a certain size range, because of how that sets some
 bits that previously (in older TLS versions) were not set. The clienthello
 extension adds padding to avoid that size range.

 https://datatracker.ietf.org/doc/html/rfc7685
 https://github.com/curl/curl/issues/2299

13.15 Select signature algorithms

 Consider adding an option or a way for users to select TLS signature
 algorithms. The signature algorithms set by a client are used directly in
 the supported signature algorithms field of the client hello message.

 https://github.com/curl/curl/issues/12982

14. GnuTLS

14.2 check connection

 Add a way to check if the connection seems to be alive, to correspond to the
 SSL_peek() way we use with OpenSSL.

15. Schannel

15.1 Extend support for client certificate authentication

 The existing support for the -E/--cert and --key options could be extended
 by supplying a custom certificate and key in PEM format, see:
 - Getting a Certificate for Schannel
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.2 Extend support for the --ciphers option

 The existing support for the --ciphers option could be extended by mapping
 the OpenSSL/GnuTLS cipher suites to the Schannel APIs, see
 - Specifying Schannel Ciphers and Cipher Strengths
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx

15.4 Add option to allow abrupt server closure

 libcurl with Schannel will error without a known termination point from the
 server (such as length of transfer, or SSL "close notify" alert) to protect
 against a truncation attack. Really old servers may neglect to send any
 termination point. An option could be added to ignore such abrupt closures.

 https://github.com/curl/curl/issues/4427

16. SASL

16.1 Other authentication mechanisms

 Add support for other authentication mechanisms such as OLP,
 GSS-SPNEGO and others.

16.2 Add QOP support to GSSAPI authentication

 Currently the GSSAPI authentication only supports the default QOP of auth
 (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
 with integrity protection) and auth-conf (Authentication with integrity and
 privacy protection).

17. SSH protocols

17.1 Multiplexing

 SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
 multiple parallel transfers from the same host using the same connection,
 much in the same spirit as HTTP/2 does. libcurl however does not take
 advantage of that ability but will instead always create a new connection
 for new transfers even if an existing connection already exists to the host.

 To fix this, libcurl would have to detect an existing connection and
 "attach" the new transfer to the existing one.

17.2 Handle growing SFTP files

 The SFTP code in libcurl checks the file size *before* a transfer starts and
 then proceeds to transfer exactly that amount of data. If the remote file
 grows while the transfer is in progress libcurl will not notice and will not
 adapt. The OpenSSH SFTP command line tool does, and libcurl could also just
 attempt to download more to see if there is more to get...

 https://github.com/curl/curl/issues/4344

17.3 Read keys from ~/.ssh/id_ecdsa, id_ed25519

 The libssh2 backend in curl is limited to only reading keys from id_rsa and
 id_dsa, which makes it fail connecting to servers that use more modern key
 types.

 https://github.com/curl/curl/issues/8586

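 As a workaround until the defaults are extended, an application can point
 libcurl at a non-default key explicitly (the path below is only an example;
 whether the key type then works still depends on the libssh2 build):

  curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
  curl_easy_setopt(curl, CURLOPT_SSH_PRIVATE_KEYFILE,
                   "/home/user/.ssh/id_ed25519");
  /* CURLOPT_SSH_PUBLIC_KEYFILE and CURLOPT_KEYPASSWD exist as well */
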
17.4 Support CURLOPT_PREQUOTE

 The two other QUOTE options are supported for SFTP, but this was left out
 for unknown reasons.

17.5 SSH over HTTPS proxy with more backends

 The SSH based protocols SFTP and SCP did not work over HTTPS proxy at all
 until PR https://github.com/curl/curl/pull/6021 brought the functionality
 with the libssh2 backend. Presumably, this support can/could be added for
 the other backends as well.

17.6 SFTP with SCP://

 OpenSSH 9 switched their 'scp' tool to speak SFTP under the hood. Going
 forward it might be worth having curl or libcurl attempt SFTP if SCP fails
 to follow suit.

18. Command line tool

18.1 sync

 "curl --sync http://example.com/feed[1-100].rss" or
 "curl --sync http://example.net/{index,calendar,history}.html"

 Downloads a range or set of URLs using the remote name, but only if the
 remote file is newer than the local file. A Last-Modified HTTP date header
 should also be used to set the mod date on the downloaded file.

18.2 glob posts

 Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
 This is easily scripted though.

18.4 --proxycommand

 Allow the user to make curl run a command and use its stdio to make requests
 and not do any network connection by itself. Example:

   curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
     http://some/otherwise/unavailable/service.php

 See https://github.com/curl/curl/issues/4941

18.5 UTF-8 filenames in Content-Disposition

 RFC 6266 documents how UTF-8 names can be passed to a client in the
 Content-Disposition header, and curl does not support this.

 https://github.com/curl/curl/issues/1888

18.6 Option to make -Z merge lined based outputs on stdout

 When a user requests multiple line-based files using -Z and sends them to
 stdout, curl does not "merge" them into complete lines but may send partial
 lines from several sources.

 https://github.com/curl/curl/issues/5175

18.8 Consider convenience options for JSON and XML?

 Could we add `--xml` or `--json` to add the headers needed to call REST APIs:

 `--xml` adds -H 'Content-Type: application/xml' -H "Accept: application/xml" and
 `--json` adds -H 'Content-Type: application/json' -H "Accept: application/json"

 Setting Content-Type when doing a GET or any other method without a body
 would be a bit strange I think - so maybe only add CT for requests with a
 body? Maybe plain `--xml` and `--json` are a bit too brief and generic.
 Maybe `--http-json` etc?

 See https://github.com/curl/curl/issues/5203

18.9 Choose the name of file in braces for complex URLs

 When using braces to download a list of URLs and you use complicated names
 in the list of alternatives, it could be handy to allow curl to use other
 names when saving.

 Consider a way to offer that. Possibly like
 {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
 colon is the output name.

 See https://github.com/curl/curl/issues/221

18.10 improve how curl works in a windows console window

 If you pull the scrollbar when transferring with curl in a Windows console
 window, the transfer is interrupted and can get disconnected. This can
 probably be improved. See https://github.com/curl/curl/issues/322

18.11 Windows: set attribute 'archive' for completed downloads

 The archive bit (FILE_ATTRIBUTE_ARCHIVE, 0x20) separates files that shall be
 backed up from those that are either not ready or have not changed.

 Downloads in progress are neither ready to be backed up, nor should they be
 opened by a different process. Only after a download has completed is it
 sensible to include it in any integral snapshot or backup of the system.

 See https://github.com/curl/curl/issues/3354

18.12 keep running, read instructions from pipe/socket

 Provide an option that makes curl not exit after the last URL (or even work
 without a given URL), and then make it read instructions passed on a pipe or
 over a socket, so that a second subsequent curl invocation can talk to the
 still running instance and ask for transfers to get done, and thus maintain
 its connection pool, DNS cache and more.

18.13 Ratelimit or wait between serial requests

 Consider a command line option that can make curl do multiple serial
 requests slowly, potentially with a (random) wait between transfers. There
 is also a proposed set of standard HTTP headers to let servers tell clients
 how to adapt to their rate limits:
 https://datatracker.ietf.org/doc/draft-ietf-httpapi-ratelimit-headers/

 See https://github.com/curl/curl/issues/5406

18.14 --dry-run

 A command line option that makes curl show exactly what it would do and send
 if it would run for real.

 See https://github.com/curl/curl/issues/5426

18.15 --retry should resume

 When --retry is used and curl actually retries a transfer, it should use the
 already transferred data and do a resumed transfer for the rest (when
 possible) so that it does not have to transfer the same data again that was
 already transferred before the retry.

 See https://github.com/curl/curl/issues/1084

18.16 send only part of --data

 When the user only wants to send a small piece of the data provided with
 --data or --data-binary, like when that data is a huge file, consider a way
 to specify that curl should only send a piece of that. One suggested syntax
 would be: "--data-binary @largefile.zip!1073741823-2147483647".

 See https://github.com/curl/curl/issues/1200

18.17 consider file name from the redirected URL with -O ?

 When a user gives a URL and uses -O, and curl follows a redirect to a new
 URL, the file name is not extracted and used from the newly redirected-to
 URL even if the new URL may have a much more sensible file name.

 This is clearly documented and helps for security since there is no surprise
 to users as to which file name might get overwritten. But maybe a new option
 could allow for this, or maybe -J should imply such a treatment as well,
 since -J already allows the server to decide what file name to use and thus
 already provides the "may overwrite any file" risk.

 This is extra tricky if the original URL has no file name part at all since
 then the current code path will error out with an error message, and we
 cannot *know* already at that point if curl will be redirected to a URL that
 has a file name...

 See https://github.com/curl/curl/issues/1241

18.18 retry on network is unreachable

 The --retry option retries transfers on "transient failures". We later added
 --retry-connrefused to also retry for "connection refused" errors.

 Suggestions have been brought to also allow retry on "network is
 unreachable" errors and while totally reasonable, maybe we should consider a
 way to make this more configurable than to add a new option for every new
 error people want to retry for?

 https://github.com/curl/curl/issues/1603

18.19 expand ~/ in config files

 For example .curlrc could benefit from being able to do this.

 See https://github.com/curl/curl/issues/2317

18.20 host name sections in config files

 Config files would be more powerful if they could set different
 configurations depending on used URLs, host name or possibly origin. Then a
 default .curlrc could use a specific user-agent only when doing requests
 against a certain site.

18.21 retry on the redirected-to URL

 When curl is told to --retry a failed transfer and follows redirects, it
 might get an HTTP 429 response from the redirected-to URL and not the
 original one, which then could make curl decide to rather retry the transfer
 on that URL only instead of the original operation to the original URL.

 Perhaps extra emphasized if the original transfer is a large POST that
 redirects to a separate GET, and that GET is what gets the 429.

 See https://github.com/curl/curl/issues/5462

18.23 Set the modification date on an uploaded file

 For SFTP and possibly FTP, curl could offer an option to set the
 modification time for the uploaded file.

 See https://github.com/curl/curl/issues/5768

18.24 Use multiple parallel transfers for a single download

 To enhance transfer speed, downloading a single URL can be split up into
 multiple separate range downloads that get combined into a single final
 result.

 An ideal implementation would not use a specified number of parallel
 transfers, but curl could:
 - First start getting the full file as transfer A
 - If after N seconds have passed and the transfer is expected to continue
   for M seconds or more, add a new transfer (B) that asks for the second
   half of A's content (and stop A at the middle).
 - If splitting up the work improves the transfer rate, it could then be done
   again. Then again, etc up to a limit.

 This way, if transfer B fails (because Range: is not supported) it will let
 transfer A remain the single one. N and M could be set to some sensible
 defaults.

 See https://github.com/curl/curl/issues/5774

18.25 Prevent terminal injection when writing to terminal

 curl could offer an option to make escape sequences either non-functional or
 to avoid cursor moves or similar, to reduce the risk of a user getting
 tricked by cleverly crafted output.

 See https://github.com/curl/curl/issues/6150

18.26 Custom progress meter update interval

 Users who are for example doing large downloads in CI or remote setups might
 want the occasional progress meter update to see that the transfer is
 progressing and has not gotten stuck, but they may not appreciate the
 many-times-a-second frequency curl can end up doing it with now.

18.27 -J and -O with %-encoded file names

 -J/--remote-header-name does not decode %-encoded file names. RFC 6266
 details how it should be done. The can of worms is basically that we have no
 charset handling in curl and ASCII >=128 is a challenge for us. Not to
 mention that decoding also means that we need to check for nastiness that is
 attempted, like "../" sequences and the like. Probably everything to the
 left of any embedded slashes should be cut off.
 https://curl.se/bug/view.cgi?id=1294

 -O also does not decode %-encoded names, and while it has even less
 information about the charset involved the process is similar to the -J
 case.

 Note that we will not add decoding to -O without the user asking for it with
 some other means as well, since -O has always been documented to use the
 name exactly as specified in the URL.

18.28 -J with -C -

 When using -J (with -O), automatically resumed downloading together with
 "-C -" fails. Without -J the same command line works. This happens because
 the resume logic is worked out before the target file name (and thus its
 pre-transfer size) has been figured out. This can be improved.

 https://curl.se/bug/view.cgi?id=1169

18.29 --retry and transfer timeouts

 If using --retry and the transfer times out (possibly due to using -m or
 -y/-Y) the next attempt does not resume the transfer properly from what was
 downloaded in the previous attempt but will truncate and restart at the
 original position where it was before the previous failed attempt. See
 https://curl.se/mail/lib-2008-01/0080.html and Mandriva bug report
 https://qa.mandriva.com/show_bug.cgi?id=22565

19. Build

19.2 Enable PIE and RELRO by default

 Especially when having programs that execute curl via the command line, PIE
 renders the exploitation of memory corruption vulnerabilities a lot more
 difficult. This can be attributed to the additional information leaks being
 required to conduct a successful attack. RELRO, on the other hand, marks
 different binary sections like the GOT as read-only and thus kills a handful
 of techniques that come in handy when attackers are able to arbitrarily
 overwrite memory. A few tests showed that enabling these features had close
 to no impact, neither on the performance nor on the general functionality of
 curl.

19.3 Do not use GNU libtool on OpenBSD

 When compiling curl on OpenBSD with "--enable-debug" it will give linking
 errors when you use GNU libtool. This can be fixed by using the libtool
 provided by OpenBSD itself. However for this the user always needs to invoke
 make with "LIBTOOL=/usr/bin/libtool". It would be nice if the script could
 have some magic to detect if this system is an OpenBSD host and then use the
 OpenBSD libtool instead.

 See https://github.com/curl/curl/issues/5862

19.4 Package curl for Windows in a signed installer

 See https://github.com/curl/curl/issues/5424

19.5 make configure use --cache-file more and better

 The configure script can be improved to cache more values so that repeated
 invokes run much faster.

 See https://github.com/curl/curl/issues/7753

19.6 build curl with Windows Unicode support

 The user wants an easier way to tell autotools to build curl with Windows
 Unicode support, like ./configure --enable-windows-unicode

 See https://github.com/curl/curl/issues/7229

20. Test suite

20.1 SSL tunnel

 Make our own version of stunnel for simple port forwarding to enable HTTPS
 and FTP-SSL tests without the stunnel dependency; it could also allow us to
 provide test tools built with either OpenSSL or GnuTLS.

20.2 nicer lacking perl message

 If perl was not found by the configure script, do not attempt to run the
 tests but explain nicely why they cannot be run.

20.3 more protocols supported

 Extend the test suite to include more protocols. The telnet tests could just
 do FTP or HTTP operations (for which we have test servers).

20.4 more platforms supported

 Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
 fork()s and it should become even more portable.

20.5 Add support for concurrent connections

 Tests 836, 882 and 938 were designed to verify that separate connections are
 not used when using different login credentials in protocols that should not
 reuse a connection under such circumstances.

 Unfortunately, ftpserver.pl does not appear to support multiple concurrent
 connections. The read while() loop seems to loop until it receives a
 disconnect from the client, where it then enters the waiting for connections
 loop. When the client opens a second connection to the server, the first
 connection has not been dropped (unless it has been forced - which we should
 not do in these tests) and thus the wait for connections loop is never
 entered to receive the second connection.

20.6 Use the RFC 6265 test suite

 A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
 https://github.com/abarth/http-state/tree/master/tests

 It'd be really awesome if someone would write a script/setup that would run
 curl with that test suite and detect deviances. Ideally, that would even be
 incorporated into our regular test suite.

20.7 Support LD_PRELOAD on macOS

 LD_PRELOAD does not work on macOS, but there are tests which require it to
 run properly. Look into making the preload support in runtests.pl portable
 such that it uses DYLD_INSERT_LIBRARIES on macOS.

20.8 Run web-platform-tests URL tests

 Run the web-platform-tests URL tests and compare results with browsers on
 wpt.fyi.

 It would help us find issues to fix and help us document where our parser
 differs from the WHATWG URL spec parsers.

 See https://github.com/curl/curl/issues/4477

21. MQTT

21.1 Support rate-limiting

 The rate-limiting logic is done in the PERFORMING state in multi.c but MQTT
 is not (yet) implemented to use that.

22. TFTP

22.1 TFTP doesn't convert LF to CRLF for mode=netascii

 RFC 3617 defines that a TFTP transfer can be done using "netascii" mode.
 curl does not support extracting that mode from the URL nor does it treat
 such transfers specifically. It should probably do LF to CRLF translations
 for them.

 See https://github.com/curl/curl/issues/12655