Hi!

I have the following question/issue: Go panics with `panic: lost connection to pod` even after the port-forwards were closed.

github.com/anthhub/forwarder v1.1.0

Here is the simplified code I use. The main function is actually executed in a goroutine (it is part of some e2e tests running in k8s):
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/anthhub/forwarder"
)

func PortForward(options []*forwarder.Option, config string) (*forwarder.Result, error) {
	ret, err := forwarder.WithForwarders(context.Background(), options, config)
	if err != nil {
		fmt.Printf("Error occurred while configuring port-forwarding with config (%s) and options %v\n", config, options)
		// ret may be nil here, so don't call ret.Close()
		return nil, err
	}
	ports, err := ret.Ready()
	if err != nil {
		fmt.Printf("Error occurred while waiting for port-forwarding to be ready with config (%s) and options %v\n", config, options)
		ret.Close()
		return nil, err
	}
	fmt.Printf("Port-forwarding established with ports: %+v\n", ports)
	fmt.Println("Make sure to close forwarding via Close()")
	return ret, nil
}

func main() {
	const httpsPort int = 443
	const certRenewalTimeoutSeconds = 120
	localPortForwardPort := 26842
	kubeConfig := "~/.kube/config"

	type test struct {
		Namespace  string
		Source     string
		RemotePort int
	}
	tests := []test{
		{
			Namespace:  "default",
			Source:     "svc/my-service-1",
			RemotePort: 8000,
		},
		{
			Namespace:  "default",
			Source:     "svc/my-service-2",
			RemotePort: 8443,
		},
	}
	for _, test := range tests {
		portForwardOptions := []*forwarder.Option{
			{
				LocalPort:  localPortForwardPort,
				RemotePort: test.RemotePort,
				Source:     test.Source,
				Namespace:  test.Namespace,
			},
		}
		portForward, err := PortForward(portForwardOptions, kubeConfig)
		if err != nil {
			fmt.Println(err)
			continue // portForward is nil on error; closing it would panic
		}
		portForwardOpenedAt := time.Now().Unix()
		fmt.Printf("Portforward opened to %d at %d\n", localPortForwardPort, portForwardOpenedAt)
		closeForward := func(rst *forwarder.Result, port int, timestamp int64) {
			fmt.Printf("Closing portforward %v on port %d opened at %d\n", rst, port, timestamp)
			rst.Close()
		}
		defer closeForward(portForward, localPortForwardPort, portForwardOpenedAt)

		// do some tests here, this code returns some value

		localPortForwardPort++
		fmt.Printf("Port incremented. Next port-forwarding will be opened at %d. They all will be closed on return from this func\n", localPortForwardPort)
	}
}
```
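A side note on the deferred closes in the loop above: `defer` runs when the enclosing function returns, not at the end of each loop iteration, so every forward stays open until `main` exits, and the deferred calls run in LIFO order. A minimal demonstration of that semantics:

```go
package main

import "fmt"

func demo() {
	for i := 0; i < 3; i++ {
		// each defer is queued; none of them runs until demo returns
		defer fmt.Printf("closed %d\n", i)
	}
	fmt.Println("loop done; nothing closed yet")
}

func main() {
	demo()
	// prints "loop done; nothing closed yet", then "closed 2", "closed 1", "closed 0"
}
```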
It all works great; after this particular test concludes I see in the logs:

```
2024/06/19 10:30:42 Closing portforward &{0x1041d7c30 0x1041d7ab0 0x1041d79c0} on port 26843 opened at 1718782197
2024/06/19 10:30:42 Closing portforward &{0x1041d7c30 0x1041d7ab0 0x1041d79c0} on port 26842 opened at 1718782153
```
However, down the road, long after all ports were supposedly closed, Go panics in the next tests with:

```
panic: lost connection to pod

goroutine 3903 [running]:
github.com/anthhub/forwarder.portForwardAPod.func1()
	/vendor/github.com/anthhub/forwarder/forwarder.go:164 +0x2d
created by github.com/anthhub/forwarder.portForwardAPod in goroutine 3902
	/vendor/github.com/anthhub/forwarder/forwarder.go:162 +0x419
```
In my initial implementation I was calling Close() manually on each iteration (not through defer), but on the second iteration the port from the first one was somehow still in use, so I resorted to using a separate port for each iteration. Now I have tried with defer and the ports are closed on return (as they should be), but the issue still persists.
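On the "port still in use" symptom: instead of incrementing ports manually, a stdlib-only alternative is to ask the OS for an unused port and pass that as `LocalPort`. A minimal sketch (the `freePort` helper is my own, not part of forwarder):

```go
package main

import (
	"fmt"
	"net"
)

// freePort binds to port 0 so the OS picks an unused TCP port,
// then closes the listener and reports the port number.
// Note: there is a small race window before the forwarder rebinds it.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Printf("picked local port %d\n", port)
}
```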
Any thoughts will be appreciated.