6-Jun-96 20:48:51-GMT,1144;000000000001
Received: (from fdc@localhost) by watsun.cc.columbia.edu (8.7.5/8.7.3)
	id QAA22229; Thu, 6 Jun 1996 16:48:51 -0400 (EDT)
Date: Thu, 6 Jun 96 16:48:51 EDT
From: Frank da Cruz
To: Jeffrey Altman
Subject: Autoserver
Message-ID:

This is pretty gross, but...  I realized that kstart() could just as
easily recognize an I packet as an S packet, so I hacked the code into
chkspkt() in ckcfn2.c and kstart() in ckucon.c.  So now, you can tell the
remote Kermit to "get blah", and the local Kermit goes into server mode
and sends "blah".  Neat, eh?

The only problem is, it stays in server mode after it finishes sending.
This can be easily remedied by making a built-in macro that does the GET
and then a FINISH, but I'll have to think about what it should be
called...

In case you want to put the code in K95 (I'm still not sure it's a good
idea), just adapt its version of kstart, and the code in ckoco?.c that
calls kstart, like I did in ckucon.c -- just a couple of lines.

Hmmm...  Wait a minute, I'm getting an idea...  Back in a sec.

6-Jun-96 22:23:17-GMT,1648;000000000001
Received: (from fdc@localhost) by watsun.cc.columbia.edu (8.7.5/8.7.3)
	id SAA03258; Thu, 6 Jun 1996 18:23:14 -0400 (EDT)
Date: Thu, 6 Jun 96 18:23:14 EDT
From: Frank da Cruz
To: John Chandler
Cc: Joe Doupnik, Jeffrey Altman
Subject: Another tasteless innovation
Message-ID:

If the terminal emulator can be made to recognize S packets (as we have
done in K95), and if it sees one, to go into RECEIVE mode automatically,
then why not also have it recognize I packets and go into SERVER mode
automatically?  (Wait, don't answer yet :-)

The benefit is that the user, while CONNECTed, could initiate file
transfers not only by saying "send blah", but also with "get blah", or
even "remote directory > blah".  The obvious fly in this ointment is that
once the terminal emulator enters Kermit server mode, how does it exit?
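[Editor's sketch: the recognition trick described above -- the terminal emulator watching its input stream for a packet-start character, then branching on the packet type once it arrives -- might look roughly like the following. This is a simplified stand-in, not the actual kstart()/chkspkt() code from ckucon.c and ckcfn2.c; the state machine, the fixed Ctrl-A start character, and the function name are all assumptions.]

```c
#include <stdio.h>

/* Possible outcomes of scanning one incoming byte. */
enum kmode { K_NONE, K_RECEIVE, K_SERVER };

#define SOH 0x01                        /* default Kermit start character */

/* Hypothetical stand-in for kstart(): feed it the byte stream one
   character at a time; once the TYPE field of an apparent packet
   arrives, pick a mode.  A real implementation would validate the
   whole packet (length, sequence, checksum) before committing. */
enum kmode
kstart_sketch(int c)
{
    static int state = 0;               /* 0 = idle */

    switch (state) {
      case 0:
        if (c == SOH) state = 1;        /* possible packet start */
        return K_NONE;
      case 1:
        state = 2;                      /* LEN field -- skipped in sketch */
        return K_NONE;
      case 2:
        state = 3;                      /* SEQ field -- skipped in sketch */
        return K_NONE;
      default:
        state = 0;                      /* TYPE field decides the mode */
        if (c == 'S') return K_RECEIVE; /* Send-Init: other side sending */
        if (c == 'I') return K_SERVER;  /* Init: other side is a client  */
        return K_NONE;
    }
}
```

Fed the start of an S packet, this returns K_RECEIVE on the type byte; fed an I packet, K_SERVER -- which is the whole of the "autoserver" idea in miniature.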
Easy: in the terminal emulator, we set Yet Another Global Flag that says
that server mode was activated by the terminal emulator recognizing an I
packet.  When the protocol module finishes whatever it was asked to do and
goes back to server command wait, it checks this flag and, if set, unsets
it and returns; otherwise it waits for more packets.  Works great,
*almost* makes APC obsolete.

But another fly waits in the wings -- as I recall, Kermit-370 is somewhat
abstemious with I packets, figuring it need not resend them on each
transaction, if already done once on the same connection.  So John, maybe
you could consider sending an I packet every time?

- Frank

7-Jun-96 3:01:04-GMT,2176;000000000011
Received: from CUVMB.CC.COLUMBIA.EDU (cuvmb.cc.columbia.edu [128.59.40.129])
	by watsun.cc.columbia.edu (8.7.5/8.7.3) with SMTP id XAA22011;
	Thu, 6 Jun 1996 23:01:03 -0400 (EDT)
Received: from CUVMB.CC.COLUMBIA.EDU by CUVMB.CC.COLUMBIA.EDU
	(IBM VM SMTP V2R1) with BSMTP id 7604; Thu, 06 Jun 96 23:00:32 EDT
Date: Thu, 1996 Jun 6 22:20 EDT
From: (John F. Chandler) JCHBN@CUVMB.CC.COLUMBIA.EDU
To: (Frank da Cruz) fdc@watsun.CC.COLUMBIA.EDU,
	(Joe Doupnik) JRD@cc.usu.edu,
	(Jeffrey Altman) jaltman@watsun.CC.COLUMBIA.EDU
Subject: Re: Another tasteless innovation
In-reply-to: fdc@watsun.cc.columbia.edu message of Thu, 6 Jun 96 18:23:14 EDT
Message-id:

> If the terminal emulator can be made to recognize S packets

Just one or two quick thoughts about this.  In Doomsday mode, the packet
does not start with a control character, so the possibility of accidental
spurious packets is a bit less remote.  How carefully does the program
check to make sure the apparent S packet is real?  Does it, for example,
insist on a "sane" length and a valid checksum?  Is the S-packet finder
actually known to work in Doomsday mode?  Just curious...
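[Editor's sketch: Frank's "Yet Another Global Flag" exit scheme above might look something like this in outline. All names here are hypothetical -- this is a sketch of the idea, not the code that went into C-Kermit or K95.]

```c
#include <stdio.h>

/* Hypothetical flag: nonzero when server mode was entered automatically
   because the terminal emulator spotted an incoming I packet. */
static int autoserver = 0;

/* Stand-in for handling one server transaction (GET, REMOTE DIRECTORY,
   and so on).  Returns 0 to keep serving, nonzero on FINISH or BYE. */
static int do_one_transaction(void) { return 0; }

/* Simplified server command loop: after each completed transaction, an
   auto-entered server clears the flag and drops back to the terminal
   screen instead of waiting for more packets.  Returns the number of
   transactions served. */
int
server_loop(int max_iterations)
{
    int served = 0;
    while (max_iterations-- > 0) {
        if (do_one_transaction())
            break;                      /* client said FINISH */
        served++;
        if (autoserver) {               /* entered via I-packet detection */
            autoserver = 0;             /* one transaction, then back out */
            break;
        }
    }
    return served;
}
```

With the flag set, the loop serves exactly one request and returns; with it clear, it behaves like an ordinary server and keeps waiting -- which is why, as Frank says, this *almost* makes APC obsolete.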
> But another fly waits in the wings -- as I recall, Kermit-370 is somewhat
> abstemious with I packets, figuring it need not resend them on each
> transaction, if already done once on the same connection.

It's not quite that abstemious.  It recognizes that any issuance of the
SET subcommand may result in a change in the INIT parameters, and so it
makes a note to itself to send another I-packet before the next server
interaction.  This mechanism is triggered even by issuing "SET ?" or
"SET D OF", so it is easy to force an I-packet.

> So John, maybe
> you could consider sending an I packet every time?

It occurs to me that a "remote" Kermit, when given a subcommand geared
toward a "local" (client) Kermit, could assume that someone is going to
magically detect the coming packet.  K-370 could check to see if it is
"remote" and, if so, override the I-packet suppressor.
                                            John

7-Jun-96 13:33:01-GMT,2326;000000000001
Received: (from fdc@localhost) by watsun.cc.columbia.edu (8.7.5/8.7.3)
	id JAA06563; Fri, 7 Jun 1996 09:32:56 -0400 (EDT)
Date: Fri, 7 Jun 96 9:32:56 EDT
From: Frank da Cruz
To: JCHBN@CUVMB.CC.COLUMBIA.EDU
Cc: JRD@cc.usu.edu, jaltman@watsun.CC.COLUMBIA.EDU
Subject: Re: Another tasteless innovation
In-Reply-To: Your message of Thu, 1996 Jun 6 22:20 EDT
Message-ID:

> Just one or two quick thoughts about this.  In Doomsday mode, the packet
> does not start with a control character, so the possibility of accidental
> spurious packets is a bit less remote.  How carefully does the program
> check to make sure the apparent S packet is real?  Does it, for example,
> insist on a "sane" length and a valid checksum?

Yes to both.  It looks for the current start character, it checks the
perceived length as well as the length field, the sequence number, the
type, and the checksum.

> Is the S-packet finder
> actually known to work in Doomsday mode?  Just curious...

Currently no, but it could.
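[Editor's sketch: the checks Frank enumerates above can be illustrated with a toy validator. It assumes the default Ctrl-A start character and the standard Kermit type-1 single-character block check; the function name and layout are invented for the sketch, and the real chkspkt() in ckcfn2.c is more general.]

```c
#include <string.h>

#define SOH 0x01
#define tochar(x) ((x) + 32)            /* printable encoding of a number */
#define unchar(c) ((c) - 32)            /* ...and its inverse */

/* Validate a complete buffered packet along the lines Frank describes:
   start character, length field vs. perceived length, sequence number,
   type, and the type-1 (single character) block check.  `n` is the
   number of bytes up to but not including the end-of-line character.
   Returns the packet type on success, -1 on any failure. */
int
chkspkt_sketch(const char *p, int n)
{
    int len, seq, type, sum, i;

    if (n < 5 || p[0] != SOH) return -1;        /* start character */
    len = unchar((unsigned char)p[1]);          /* chars after LEN field */
    if (len < 3 || len != n - 2) return -1;     /* "sane" length, and it
                                                   matches what we saw */
    seq = unchar((unsigned char)p[2]);
    if (seq < 0 || seq > 63) return -1;         /* sequence number */
    type = p[3];
    if (type != 'S' && type != 'I') return -1;  /* types we auto-detect */
    sum = 0;                                    /* block check type 1:   */
    for (i = 1; i < n - 1; i++)                 /* sum LEN through last  */
        sum += (unsigned char)p[i];             /* data character        */
    if (p[n - 1] != tochar((sum + ((sum & 192) / 64)) & 63))
        return -1;                              /* checksum mismatch */
    return type;
}
```

For example, "\001# S8" is a minimal, zero-data S packet with sequence 0 and a correct check character, and the sketch accepts it; flipping any byte makes it fail.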
It presently depends on finding the packet-end character, but that could
be changed at a slight cost in efficiency.

> > But another fly waits in the wings -- as I recall, Kermit-370 is somewhat
> > abstemious with I packets, figuring it need not resend them on each
> > transaction, if already done once on the same connection.
>
> It's not quite that abstemious.  It recognizes that any issuance of the
> SET subcommand may result in a change in the INIT parameters, and so it
> makes a note to itself to send another I-packet before the next server
> interaction.  This mechanism is triggered even by issuing "SET ?" or
> "SET D OF", so it is easy to force an I-packet.
>
> > So John, maybe
> > you could consider sending an I packet every time?
>
> It occurs to me that a "remote" Kermit, when given a subcommand geared
> toward a "local" (client) Kermit, could assume that someone is going
> to magically detect the coming packet.  K-370 could check to see if
> it is "remote" and, if so, override the I-packet suppressor.

That sounds good.  If you make this change, maybe you could bump up
K-370's minor edit number, so we have something firm to refer to when
describing combinations that can do these new automations?  Thanks!

- Frank
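[Editor's sketch: John's proposed override reduces to a small predicate. Everything here is hypothetical naming -- an illustration of the decision being discussed, not K-370 code.]

```c
/* Should the next server interaction be preceded by an I-packet?
   Send one if the INIT parameters may have changed (any SET subcommand
   issued since the last I-packet), or -- the proposed change -- if we
   are the "remote" Kermit issuing a subcommand aimed at a local/client
   Kermit that may be auto-detecting packets. */
int
need_ipacket(int set_issued_since_last, int we_are_remote, int client_cmd)
{
    if (set_issued_since_last)
        return 1;       /* parameters may have changed: must resend */
    if (we_are_remote && client_cmd)
        return 1;       /* proposed override of the I-packet suppressor */
    return 0;           /* already sent on this connection: suppress */
}
```

The point of the override is the second condition: a remote Kermit that sends an I packet every time it acts as a client gives the terminal emulator on the other end something to auto-detect.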