Solver can code that for you

Monday, February 3, 2025, 10:00, by InfoWorld
In the area of programming automation, it's sometimes better to fix a software problem by first analyzing the whole repository rather than guessing which files are relevant to the problem. That's particularly true when you manage your project on GitHub or one of its competitors, because problem tickets can be part of the repo.

If you can automate the entire fix, even better. Essentially, wholesale is better than retail. Why not use AI to fix entire problems rather than limiting it to completing a line of code?

If you're intimately familiar with your code and trying to solve a bug yourself, you might be able to jump right to the offending methods. But that's the exception rather than the rule, and being the human expert doesn't usually scale.

There are currently at least three commercial products addressing repo-level automated software engineering using large language models: Solver, GitHub Copilot Workspace, and Amazon Q Developer Agent. There are also several research projects in the space, for example OpenHands.

SWE-bench tests and ranks the efficacy of such efforts. As of this writing, Solver is ranked #16 on the SWE-bench Verified leaderboard, with 50% of the benchmark problems resolved. As you can see by viewing the leaderboard, rankings can change daily as new submissions come in.

What is Solver?

The people behind Solver describe it as a “cloud-native API that transforms software coding from a manual craft into a scalable utility. Unlike supervised AI coding assistants that offer inline suggestions focused on code completion, Solver empowers developers to offload time-consuming and tedious programming tasks wholesale.” They also talk a lot about “elastic engineering” and “repository-based reasoning.”

The API won't be available to customers until later this year, but you can request early access at the website. The site looks nearly empty when you start, but it quickly fills up once you have Solver working on a problem. You can work on your own repos, or on three demo repos supplied by the company. Suggested tasks are supplied for the demo repos, including bug fixes, security scans and fixes, functionality and speed improvements, refactoring, improving tests and documentation, and maintenance.

[Screenshot: the Solver site with a task in progress (IDG)]

Solver example

I tried asking Solver to fix an example bug in the requests repo. Here’s the transcript. The TL;DR version of this section is that Solver does go off on tangents (like every AI code generation model I know of) but can recover from them, at least in Python, which is currently the only language in which it can run test code.

My query:

Requests that include multiple path separators (e.g. //) are now causing failures. This may be related to Authentication.
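
(In concrete terms, the complaint is that a prepared URL no longer matches the original when the path contains consecutive slashes. A minimal way to check for the symptom would be something like the following sketch; the URL here is a placeholder, since the exact URL from my session isn't reproduced in this article.)

import requests

# Placeholder URL for illustration; any path with consecutive slashes,
# with or without credentials in the URL, will do.
url = 'http://user:pass@example.com//foo//bar'
prepared = requests.Request('GET', url).prepare()
print(prepared.url)  # with the bug present, the consecutive slashes may be collapsed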

Solver:

Found 8 relevant files

src/requests/sessions.py

src/requests/adapters.py

src/requests/models.py

tests/test_adapters.py

tests/test_requests.py

src/requests/auth.py

tests/test_utils.py

src/requests/utils.py

Let’s first examine the sessions.py file since it appears most relevant to handling URLs with multiple path separators and authentication.

Reading src/requests/sessions.py

Let’s check how authentication stripping is handled since this seems to be related to both path separators and authentication. Let’s look at the should_strip_auth method.

Searching for should_strip_auth

Let’s examine the should_strip_auth method to understand how it handles URL parsing and authentication stripping.

Reading src/requests/sessions.py

Let’s look at how URLs are actually prepared, since the issue seems to be with path separators. Let’s check the PreparedRequest class in models.py.

Reading src/requests/models.py

Let’s see the rest of the prepare_url method since it’s responsible for URL preparation and might be where the path separator issue is occurring.

Reading src/requests/models.py

Let’s check how parse_url and requote_uri functions work since they’re key to URL processing. They should be in utils.py.

Searching for def parse_url

Let’s check where parse_url is imported from, as it might be in a different module.

Searching repo for parse_url

Let’s look at the URL handling in utils.py around line 1004 where there seems to be special handling for netloc and path.

Reading src/requests/utils.py

Let’s check the requote_uri function since it’s used in the final URL preparation and might be related to how path separators are handled.

Searching for def requote_uri

Let’s look at the requote_uri function to see how it handles URL encoding.

Reading src/requests/utils.py

Looking at the code, I believe I understand the issue. When URLs contain multiple path separators (//), they’re being handled in multiple places:

In sessions.py, there’s special handling for URLs starting with // (scheme-relative URLs)

In models.py's prepare_url, the URL is parsed and reconstructed

In utils.py, requote_uri handles URL encoding but preserves forward slashes as safe characters

The issue likely occurs when authentication is present and the URL contains multiple slashes. Let’s check how the URL is initially processed in models.py's prepare_url method.

Reading src/requests/models.py

I believe I’ve identified the issue. The problem occurs in the URL preparation process when there are multiple path separators and authentication is present. Here’s what’s happening:

When a URL contains multiple slashes in its path (e.g. //) and authentication is present, the URL parsing and reconstruction process in prepare_url is not properly preserving the multiple slashes.

The key issue is in the URL reconstruction process where urlunparse is used with the parsed components, followed by requote_uri. While requote_uri preserves single slashes (they’re in the safe characters list), the initial parsing and reconstruction may be normalizing multiple slashes into single ones.

Me:

Finally! This sounds promising.

Solver:

Let’s check if there’s a test case for this scenario.

Searching for test_path_with_multiple_slashes

Let’s check for any existing tests that handle URLs with double slashes in general.

Searching for double slash

Based on my analysis, I can now submit a clear explanation of the issue and suggested fix. The problem occurs in the URL preparation process and needs to be addressed in the prepare_url method of PreparedRequest.

Solver finished

At this point I pressed the continue solving button to get Solver to continue along these lines.

Me:

You’re on the right track; please continue.

Solver:

Let’s add a test case first to verify the issue with multiple slashes and authentication.

Let’s check the existing test file first to see where we should add our test.

Reading tests/test_requests.py

Let’s add our test case to the TestRequests class. First, let’s find a good location near other URL-related tests.

Searching for test_path

Perfect, let’s add our test case right after the test_path_is_not_double_encoded test.

[Screenshot: the new test case added to tests/test_requests.py (IDG)]

Now let’s modify the prepare_url method in models.py to preserve multiple slashes. The issue is likely in how the path is handled before being passed to urlunparse.

[Screenshot: Solver's first modification to prepare_url, using a null byte as a temporary marker (IDG)]

Note that a null byte is a suspect choice of temporary marker.
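
(The reason it's suspect: anything that isn't URL-safe gets percent-encoded during URL preparation, so a raw null byte won't survive the round trip. A quick illustration using the standard library directly, independent of Solver's actual patch:)

from urllib.parse import quote

# A null byte inserted into a path as a sentinel gets percent-encoded,
# so it shows up in the prepared URL instead of being swapped back out.
print(quote('/foo' + '\x00' + '/bar'))  # -> '/foo%00/bar'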

Solver:

Let’s run the tests to verify our changes fix the issue. We’ll use pytest to run just our new test case.

$ python -m pytest tests/test_requests.py -k test_path_with_multiple_slashes -v
============================= test session starts ==============================
platform linux -- Python 3.9.20, pytest-8.3.3, pluggy-1.5.0 -- /usr/local/bin/python
cachedir: .pytest_cache
rootdir: /repo
configfile: pyproject.toml
plugins: cov-5.0.0, httpbin-2.1.0
collecting... collected 330 items / 329 deselected / 1 selected

tests/test_requests.py::TestRequests::test_path_with_multiple_slashes FAILED [100%]

=================================== FAILURES ===================================
_________________ TestRequests.test_path_with_multiple_slashes _________________

self =

    def test_path_with_multiple_slashes(self):
        '''Test that URLs with multiple slashes are preserved.'''
        url = '
        request = requests.Request('GET', url).prepare()
>       assert request.url == url
E AssertionError: assert ' == '
E
E -
E? ^ ^^ ^^
E +
E? ^^^ ^^^ ^^^

tests/test_requests.py:151: AssertionError
=========================== short test summary info ============================
FAILED tests/test_requests.py::TestRequests::test_path_with_multiple_slashes
====================== 1 failed, 329 deselected in 0.75s =======================
Exit Code: 1

The test failure shows that our null byte replacement approach isn’t working correctly – the null bytes are getting URL encoded. Let’s try a different approach using a more URL-safe temporary marker.

[Screenshot: Solver's revised change, using a more URL-safe temporary marker (IDG)]

We need to add the re import at the top of the file.

Let’s check the current imports first.

Reading src/requests/models.py

I see we can add re to the existing imports. Let’s add it with the other standard library imports at the top.

[Screenshot: the re import added to src/requests/models.py (IDG)]

Now let’s run the test again to see if our new approach works.

$ python -m pytest tests/test_requests.py -k test_path_with_multiple_slashes -v
============================= test session starts ==============================
platform linux -- Python 3.9.20, pytest-8.3.3, pluggy-1.5.0 -- /usr/local/bin/python
cachedir: .pytest_cache
rootdir: /repo
configfile: pyproject.toml
plugins: httpbin-2.1.0, cov-5.0.0
collecting... collected 330 items / 329 deselected / 1 selected

tests/test_requests.py::TestRequests::test_path_with_multiple_slashes FAILED [100%]

=================================== FAILURES ===================================
_________________ TestRequests.test_path_with_multiple_slashes _________________

self =

    def test_path_with_multiple_slashes(self):
        '''Test that URLs with multiple slashes are preserved.'''
        url = '
>       request = requests.Request('GET', url).prepare()

tests/test_requests.py:150:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/requests/models.py:299: in prepare
    p.prepare(
src/requests/models.py:368: in prepare
    self.prepare_url(url, params)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =, url = '
params = {}

    def prepare_url(self, url, params):
        '''Prepares the given HTTP URL.'''
        #: Accept objects that have string representations.
        #: We're unable to blindly call unicode/str functions
        #: as this will include the bytestring indicator (b'')
        #: on python 3.x.
        #: https://github.com/psf/requests/pull/2238
        if isinstance(url, bytes):
            url = url.decode('utf8')
        else:
            url = str(url)

        # Remove leading whitespaces from url
        url = url.lstrip()

        # Don't do any URL preparation for non-HTTP schemes like `mailto`,
        # `data` etc to work around exceptions from `url_parse`, which
        # handles RFC 3986 only.
        if ':' in url and not url.lower().startswith('http'):
            self.url = url
            return

        # Support for unicode domain names and paths.
        try:
            scheme, auth, host, port, path, query, fragment = parse_url(url)
        except LocationParseError as e:
            raise InvalidURL(*e.args)

        if not scheme:
            raise MissingSchema(
                f'Invalid URL {url!r}: No scheme supplied. '
                f'Perhaps you meant https://{url}?'
            )

        if not host:
            raise InvalidURL(f'Invalid URL {url!r}: No host supplied')

        # In general, we want to try IDNA encoding the hostname if the string contains
        # non-ASCII characters. This allows users to automatically get the correct IDNA
        # behaviour. For strings containing only ASCII characters, we need to also verify
        # it doesn't start with a wildcard (*), before allowing the unencoded hostname.
        if not unicode_is_ascii(host):
            try:
                host = self._get_idna_encoded_host(host)
            except UnicodeError:
                raise InvalidURL('URL has an invalid label.')
        elif host.startswith(('*', '.')):
            raise InvalidURL('URL has an invalid label.')

        # Carefully reconstruct the network location
        netloc = auth or ''
        if netloc:
            netloc += '@'
        netloc += host
        if port:
            netloc += f':{port}'

        # Bare domains aren't valid URLs.
        if not path:
            path = '/'
        else:
            # Preserve multiple consecutive slashes in the path
            original_path = path
            path = re.sub('//+', '/', path)  # Normalize to single slashes for parsing

        if isinstance(params, (str, bytes)):
            params = to_native_string(params)

        enc_params = self._encode_params(params)
        if enc_params:
            if query:
                query = f'{query}&{enc_params}'
            else:
                query = enc_params

        url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))

        # Restore multiple slashes if they were present in the original path
        if path != '/' and 'original_path' in locals():
            # Find the path portion in the final URL and replace it
>           url_parts = list(urlparse(url))
E           NameError: name 'urlparse' is not defined

src/requests/models.py:490: NameError
=========================== short test summary info ============================
FAILED tests/test_requests.py::TestRequests::test_path_with_multiple_slashes
====================== 1 failed, 329 deselected in 0.55s =======================
Exit Code: 1

Ah, we need to use the imported urlparse. Let’s fix the imports and try a simpler approach.

[Screenshot: Solver's simpler fix to prepare_url (IDG)]

Let’s run the test again to see if this simpler approach works.

$ python -m pytest tests/test_requests.py -k test_path_with_multiple_slashes -v
============================= test session starts ==============================
platform linux -- Python 3.9.20, pytest-8.3.3, pluggy-1.5.0 -- /usr/local/bin/python
cachedir: .pytest_cache
rootdir: /repo
configfile: pyproject.toml
plugins: httpbin-2.1.0, cov-5.0.0
collecting... collected 330 items / 329 deselected / 1 selected

tests/test_requests.py::TestRequests::test_path_with_multiple_slashes PASSED [100%]

====================== 1 passed, 329 deselected in 0.62s =======================
Exit Code: 0

The test passes. The simpler solution works because we’re letting the URL components pass through without any normalization. Let’s submit our changes which include:

Added a test case to verify multiple slashes are preserved in URLs with and without authentication

Modified prepare_url to preserve the exact path structure without normalization

Added a comment explaining the preservation of multiple slashes

Solver finished
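
Solver's final patch itself appears only in screenshots, but judging from the traceback above and Solver's own summary, the essence of the simpler approach is to leave the parsed path alone (no slash collapsing, no temporary markers) and let requote_uri handle the encoding. Here is a self-contained sketch of why that works, built from the same pieces prepare_url uses; it's my reconstruction, not Solver's exact diff.

from urllib.parse import urlunparse
from requests.utils import requote_uri

# Reconstruct a URL from already-parsed components without touching the path.
# requote_uri() treats '/' as a safe character, so consecutive slashes pass through.
scheme, netloc, path, query, fragment = 'http', 'user:pass@example.com', '//foo//bar', '', ''
print(requote_uri(urlunparse([scheme, netloc, path, None, query, fragment])))
# -> http://user:pass@example.com//foo//bar  (consecutive slashes preserved)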

Discussion with a founder

Somewhere in the middle of the session shown in the previous section, you’ll see that I pressed the continue solving button and told Solver, “You’re on the right track; please continue.”

I wondered why Solver had stopped here to wait for user input, so I sent in an issue report and asked, “Why does Solver stop after identifying the problem? Yes, it did continue when I pressed the appropriate button. This is a question, not a bug report.”

Surprisingly quickly, I got a considered response from Mark Gabel, one of the founders of Solver:

Hi, Martin! It’s great to hear from you and I’m glad you’ve had the chance to start playing with Solver.

Your question was:

Why does Solver stop after identifying the problem? Yes, it did continue when I pressed the appropriate button. This is a question, not a bug report.

This is a very insightful thing to point out — most users would have just hit the “please continue” and forgotten about it.

You’ve actually hit on a point that’s relevant to all kinds of AI agents, Solver included: Just how “trigger happy” should it be? By “trigger happy,” I mean its willingness to go ahead and actually do work — making code changes, etc.

It’s something we have to be careful with and there tends to be a delicate balance. If we train Solver to be too trigger happy, then it may run off on its own and make dozens of code changes based on a hunch or a guess, even given a very ambiguous task.

However, if we overcorrect to the opposite, it becomes way too shy, often not wanting to do anything on its own without checking with the developer for permission and confirmation of the right approach. And that’s really at odds with our vision of “elastic engineering,” in which you’re truly delegating tasks.

It’s something we tweak constantly, and you actually caught a case where Solver was probably a bit too shy.

One thing I’ve proposed — but we’re debating because it’s somewhat complex — is actually giving the developer control over this via some kind of slider or knob. Imagine the “lowest” setting being, basically, “Be extra cautious; check with me before you do anything; and ask lots of questions” and the “highest” setting being equivalent to “Just go for it; take a reasonable guess at any information you’re missing.” And the values you could adjust to in between would be blends of the two.

Thank you again for pointing out this issue and being thoughtful about it. And please don’t hesitate to reach out again — every email you send or issue you report is 100% visible to my team.

Best,
Mark

Automating software development tasks

While it’s not yet finished — I expect integrations with popular IDEs sooner rather than later — Solver demonstrably raises the bar on automated software engineering. It has competition, but it’s worth testing now, especially if your projects are in Python, the only language in which it can currently run code by itself.
https://www.infoworld.com/article/3617560/solver-can-code-that-for-you.html
