Static shaping with RADIUS attributes
Hello. I read the article «Настройка RADIUS атрибутов на примере ограничения скорости VPN-соединений» (configuring RADIUS attributes, using rate limiting of VPN connections as the example).
The problem is this: I need to create several unlimited-traffic tariff plans with capped speed. There will be several remote NAS servers, both Cisco routers and PCs running FreeBSD (with poptop as the PPTP server).
Everyone says you can easily specify unified RADIUS attributes in UTM for a static speed cap, attributes that both a Cisco PPTP server and a UNIX one will understand. If so, please point me to exactly which RADIUS attributes to set for static rate limiting. With Cisco it's clear, but with poptop it isn't.
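For context: there is no single cross-vendor RADIUS attribute for rate limits. Each NAS reads its own Vendor-Specific Attribute (VSA, attribute type 26 per RFC 2865), so the "unified" part is only the VSA container, not its contents. A sketch of the wire layout (the attribute strings and vendor/sub-attribute IDs are the ones quoted later in this thread; the encoder itself is illustrative, not production code):

```python
import struct

def encode_vsa(vendor_id: int, vendor_type: int, value: bytes) -> bytes:
    """Encode a RADIUS Vendor-Specific Attribute (RFC 2865, type 26).

    Wire layout: Type(1) Length(1) Vendor-Id(4) Vendor-Type(1) Vendor-Length(1) Value(n).
    """
    sub = struct.pack("!BB", vendor_type, 2 + len(value)) + value
    return struct.pack("!BBI", 26, 6 + len(sub), vendor_id) + sub

# Cisco reads AVPairs: vendor 9, sub-attribute 1
cisco_avpair = encode_vsa(9, 1, b"lcp:interface-config#1=rate-limit output 512000 96000 96000 "
                                b"conform-action transmit exceed-action drop")

# mpd reads its own VSAs: vendor 12341, sub-attribute 7 (mpd-limit)
mpd_limit = encode_vsa(12341, 7, b"out#1=all shape 512000 96000 pass")
```

Both attributes travel in the same Access-Accept; each NAS simply ignores VSAs for vendor IDs it does not know.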
Figured it out; I'm installing MPD, but I'm having terrible problems with it.
I just can't figure this out on my own. I've set up mpd, and on connecting I get error 734.
In mpd.log:
Here 10.50.10.200 is the host I connect from, 10.50.10.1 is the VPN server, and 172.16.0.254 is the PPTP server's virtual address.
Jan 21 11:50:37 srv-m1 mpd: PPTP: Incoming control connection from 10.50.10.200 1345 to 10.50.10.1 1723
Jan 21 11:50:37 srv-m1 mpd: pptp0: attached to connection with 10.50.10.200 1345
Jan 21 11:50:37 srv-m1 mpd: [pptp0] Accepting PPTP connection
Jan 21 11:50:37 srv-m1 mpd: [pptp0] opening link "pptp0"...
Jan 21 11:50:37 srv-m1 mpd: [pptp0] link: OPEN event
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: Open event
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: state change Initial --> Starting
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: LayerStart
Jan 21 11:50:37 srv-m1 mpd: [pptp0] PPTP: attaching to peer's outgoing call
Jan 21 11:50:37 srv-m1 mpd: [pptp0] link: UP event
Jan 21 11:50:37 srv-m1 mpd: [pptp0] link: origination is remote
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: Up event
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: state change Starting --> Req-Sent
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: SendConfigReq #5
Jan 21 11:50:37 srv-m1 mpd: ACFCOMP
Jan 21 11:50:37 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:37 srv-m1 mpd: MRU 1500
Jan 21 11:50:37 srv-m1 mpd: MAGICNUM 9ef02b8a
Jan 21 11:50:37 srv-m1 mpd: AUTHPROTO CHAP MSOFTv2
Jan 21 11:50:37 srv-m1 mpd: MP MRRU 1600
Jan 21 11:50:37 srv-m1 mpd: MP SHORTSEQ
Jan 21 11:50:37 srv-m1 mpd: ENDPOINTDISC [802.1] 00 0d 88 39 2d 67
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: rec'd Configure Request #0 (Req-Sent)
Jan 21 11:50:37 srv-m1 mpd: MRU 1400
Jan 21 11:50:37 srv-m1 mpd: MAGICNUM 6fbf292a
Jan 21 11:50:37 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:37 srv-m1 mpd: ACFCOMP
Jan 21 11:50:37 srv-m1 mpd: CALLBACK 6
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: SendConfigRej #0
Jan 21 11:50:37 srv-m1 mpd: CALLBACK 6
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: rec'd Configure Request #1 (Req-Sent)
Jan 21 11:50:37 srv-m1 mpd: MRU 1400
Jan 21 11:50:37 srv-m1 mpd: MAGICNUM 6fbf292a
Jan 21 11:50:37 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:37 srv-m1 mpd: ACFCOMP
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: SendConfigAck #1
Jan 21 11:50:37 srv-m1 mpd: MRU 1400
Jan 21 11:50:37 srv-m1 mpd: MAGICNUM 6fbf292a
Jan 21 11:50:37 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:37 srv-m1 mpd: ACFCOMP
Jan 21 11:50:37 srv-m1 mpd: [pptp0] LCP: state change Req-Sent --> Ack-Sent
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: SendConfigReq #6
Jan 21 11:50:39 srv-m1 mpd: ACFCOMP
Jan 21 11:50:39 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:39 srv-m1 mpd: MRU 1500
Jan 21 11:50:39 srv-m1 mpd: MAGICNUM 9ef02b8a
Jan 21 11:50:39 srv-m1 mpd: AUTHPROTO CHAP MSOFTv2
Jan 21 11:50:39 srv-m1 mpd: MP MRRU 1600
Jan 21 11:50:39 srv-m1 mpd: MP SHORTSEQ
Jan 21 11:50:39 srv-m1 mpd: ENDPOINTDISC [802.1] 00 0d 88 39 2d 67
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: rec'd Configure Reject #6 (Ack-Sent)
Jan 21 11:50:39 srv-m1 mpd: MP MRRU 1600
Jan 21 11:50:39 srv-m1 mpd: MP SHORTSEQ
Jan 21 11:50:39 srv-m1 mpd: ENDPOINTDISC [802.1] 00 0d 88 39 2d 67
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: SendConfigReq #7
Jan 21 11:50:39 srv-m1 mpd: ACFCOMP
Jan 21 11:50:39 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:39 srv-m1 mpd: MRU 1500
Jan 21 11:50:39 srv-m1 mpd: MAGICNUM 9ef02b8a
Jan 21 11:50:39 srv-m1 mpd: AUTHPROTO CHAP MSOFTv2
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: rec'd Configure Ack #7 (Ack-Sent)
Jan 21 11:50:39 srv-m1 mpd: ACFCOMP
Jan 21 11:50:39 srv-m1 mpd: PROTOCOMP
Jan 21 11:50:39 srv-m1 mpd: MRU 1500
Jan 21 11:50:39 srv-m1 mpd: MAGICNUM 9ef02b8a
Jan 21 11:50:39 srv-m1 mpd: AUTHPROTO CHAP MSOFTv2
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: state change Ack-Sent --> Opened
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: auth: peer wants nothing, I want CHAP
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CHAP: sending CHALLENGE len:17
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: LayerUp
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: rec'd Ident #2 (Opened)
Jan 21 11:50:39 srv-m1 mpd: MESG: MSRASV5.10
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: rec'd Ident #3 (Opened)
Jan 21 11:50:39 srv-m1 mpd: MESG: MSRAS-0-MICROSOF-80B323
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CHAP: rec'd RESPONSE #1
Jan 21 11:50:39 srv-m1 mpd: Name: "testuser"
Jan 21 11:50:39 srv-m1 mpd: [pptp0] AUTH: Auth-Thread started
Jan 21 11:50:39 srv-m1 mpd: [pptp0] AUTH: Trying RADIUS
Jan 21 11:50:39 srv-m1 mpd: [pptp0] RADIUS: RadiusAuthenticate for: testuser
Jan 21 11:50:39 srv-m1 mpd: [pptp0] RADIUS: rec'd RAD_ACCESS_ACCEPT for user testuser
Jan 21 11:50:39 srv-m1 mpd: [pptp0] AUTH: RADIUS returned authenticated
Jan 21 11:50:39 srv-m1 mpd: [pptp0] AUTH: Auth-Thread finished normally
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CHAP: ChapInputFinish: status authenticated
Jan 21 11:50:39 srv-m1 mpd: Reply message: S=321511655A5CF10095E600EACDB77EC767A8B107
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CHAP: sending SUCCESS len:42
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: authorization successful
Jan 21 11:50:39 srv-m1 mpd: [pptp0] Bundle up: 1 link, total bandwidth 64000 bps
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: Open event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: state change Initial --> Starting
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: LayerStart
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: Open event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Initial --> Starting
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: LayerStart
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: Up event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: state change Starting --> Req-Sent
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: SendConfigReq #3
Jan 21 11:50:39 srv-m1 mpd: IPADDR 172.16.0.254
Jan 21 11:50:39 srv-m1 mpd: COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: Up event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Starting --> Req-Sent
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: SendConfigReq #3
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: rec'd Configure Request #4 (Req-Sent)
Jan 21 11:50:39 srv-m1 mpd: MPPC
Jan 21 11:50:39 srv-m1 mpd: 0x01000001:MPPC, stateless
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: SendConfigNak #4
Jan 21 11:50:39 srv-m1 mpd: MPPC
Jan 21 11:50:39 srv-m1 mpd: 0x000000e0:MPPE(40, 56, 128 bits),
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: rec'd Configure Request #5 (Req-Sent)
Jan 21 11:50:39 srv-m1 mpd: IPADDR 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: NAKing with 172.16.0.45
Jan 21 11:50:39 srv-m1 mpd: PRIDNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: PRINBNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: SECDNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: SECNBNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: SendConfigRej #5
Jan 21 11:50:39 srv-m1 mpd: PRIDNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: PRINBNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: SECDNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: SECNBNS 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: rec'd Configure Reject #3 (Req-Sent)
Jan 21 11:50:39 srv-m1 mpd: COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: SendConfigReq #4
Jan 21 11:50:39 srv-m1 mpd: IPADDR 172.16.0.254
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: rec'd Configure Ack #3 (Req-Sent)
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Req-Sent --> Ack-Rcvd
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: rec'd Configure Request #6 (Ack-Rcvd)
Jan 21 11:50:39 srv-m1 mpd: MPPC
Jan 21 11:50:39 srv-m1 mpd: 0x00000040:MPPE(128 bits),
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: SendConfigAck #6
Jan 21 11:50:39 srv-m1 mpd: MPPC
Jan 21 11:50:39 srv-m1 mpd: 0x00000040:MPPE(128 bits),
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Ack-Rcvd --> Opened
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: LayerUp
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: encryption required, but MPPE was not negotiated in both directions
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: failed to negotiate required encryption
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Opened --> Stopping
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: SendTerminateReq #4
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: LayerDown
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: failed to negotiate required encryption
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: state change Req-Sent --> Stopped
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: LayerFinish
Jan 21 11:50:39 srv-m1 mpd: [pptp0] No NCPs left. Closing links...
Jan 21 11:50:39 srv-m1 mpd: [pptp0] closing link "pptp0"...
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPV6CP: failed to negotiate required encryption
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: rec'd Configure Request #7 (Stopped)
Jan 21 11:50:39 srv-m1 mpd: IPADDR 0.0.0.0
Jan 21 11:50:39 srv-m1 mpd: NAKing with 172.16.0.45
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: LayerStart
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: SendConfigReq #5
Jan 21 11:50:39 srv-m1 mpd: IPADDR 172.16.0.254
Jan 21 11:50:39 srv-m1 mpd: COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: SendConfigNak #7
Jan 21 11:50:39 srv-m1 mpd: IPADDR 172.16.0.45
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: state change Stopped --> Req-Sent
Jan 21 11:50:39 srv-m1 mpd: [pptp0] link: CLOSE event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: Close event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: state change Opened --> Closing
Jan 21 11:50:39 srv-m1 mpd: [pptp0] AUTH: Accounting data for user testuser: 2 seconds, 322 octets in, 341 octets out
Jan 21 11:50:39 srv-m1 mpd: [pptp0] Bundle up: 0 links, total bandwidth 9600 bps
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: Close event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: state change Req-Sent --> Closing
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: SendTerminateReq #6
Jan 21 11:50:39 srv-m1 mpd: [pptp0] error writing len 8 frame to bypass: Network is down
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: Close event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Stopping --> Closing
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: Down event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: LayerFinish
Jan 21 11:50:39 srv-m1 mpd: [pptp0] No NCPs left. Closing links...
Jan 21 11:50:39 srv-m1 mpd: [pptp0] closing link "pptp0"...
Jan 21 11:50:39 srv-m1 mpd: [pptp0] IPCP: state change Closing --> Initial
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: Down event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: LayerFinish
Jan 21 11:50:39 srv-m1 mpd: [pptp0] CCP: state change Closing --> Initial
Jan 21 11:50:39 srv-m1 mpd: [pptp0] AUTH: Cleanup
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: SendTerminateReq #8
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: LayerDown
Jan 21 11:50:39 srv-m1 mpd: [pptp0] link: CLOSE event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: Close event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] rec'd proto IPCP during terminate phase
Jan 21 11:50:39 srv-m1 mpd: [pptp0] rec'd proto CCP during terminate phase
Jan 21 11:50:39 srv-m1 mpd: [pptp0] rec'd proto IPCP during terminate phase
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: rec'd Terminate Ack #8 (Closing)
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: state change Closing --> Closed
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: LayerFinish
Jan 21 11:50:39 srv-m1 mpd: pptp0-0: clearing call
Jan 21 11:50:39 srv-m1 mpd: pptp0-0: killing channel
Jan 21 11:50:39 srv-m1 mpd: [pptp0] PPTP call terminated
Jan 21 11:50:39 srv-m1 mpd: [pptp0] link: DOWN event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: Down event
Jan 21 11:50:39 srv-m1 mpd: [pptp0] LCP: state change Closed --> Initial
Jan 21 11:50:39 srv-m1 mpd: pptp0: closing connection with 10.50.10.200 1345
Jan 21 11:50:39 srv-m1 mpd: pptp0: CID 0x741b in SetLinkInfo not found
Jan 21 11:50:39 srv-m1 mpd: pptp0: killing connection with 10.50.10.200 1345
mpd.conf:
startup:
set web ip 10.50.10.1
set web port 5006
set web user admin pass
set web open
default:
load pptp0
load pptp1
load pptp2
pptp0:
new -i ng00 pptp0 pptp0
set ipcp ranges 172.16.0.254/32 172.16.0.1/16
load pptp_standart
pptp1:
new -i ng01 pptp1 pptp1
set ipcp ranges 172.16.0.254/32 172.16.0.2/16
load pptp_standart
pptp2:
new -i ng02 pptp2 pptp2
set ipcp ranges 172.16.0.254/32 172.16.0.3/16
load pptp_standart
pptp_standart:
set link yes acfcomp protocomp
set link no pap chap
set link enable chap
set link accept chap-msv1 chap-msv2
set link keep-alive 60 180
set bundle enable multilink
set bundle enable compression
set bundle yes crypt-reqd
set ccp accept mppc
set ccp yes mpp-e40
set ccp yes mpp-e56
set ccp yes mpp-e128
#set ipcp yes vjcomp
#set ipcp dns primary 172.16.0.254 172.16.0.253
#set ipcp accept req-pri-dns req-sec-dns
set iface disable on-demand
set iface enable tcpmssfix
set iface mtu 1300
#set pptp self 10.50.10.1
# RADIUS
set radius server 127.0.0.1 secret 1812 1813
set radius timeout 3
set radius retries 3
set auth enable radius-auth
#set auth enable radius-acct
set auth disable internal
#set auth acct-update 10
set radius enable message-authentic
mpd.links:
pptp0:
set link type pptp
set pptp enable incoming
set pptp disable originate
pptp1:
set link type pptp
set pptp enable incoming
set pptp disable originate
pptp2:
set link type pptp
set pptp enable incoming
set pptp disable originate
kldstat:
Id Refs Address Size Name
1 11 0xc0400000 7c79dc kernel
2 1 0xc0bc8000 5c838 acpi.ko
3 1 0xc6c76000 2d000 pf.ko
4 1 0xc6fa1000 4000 ng_socket.ko
5 6 0xc6fa5000 a000 netgraph.ko
6 1 0xc6fb8000 3000 ng_iface.ko
7 1 0xc6fbb000 6000 ng_ppp.ko
8 1 0xc6fd0000 4000 ng_pptpgre.ko
9 1 0xc6fd4000 4000 ng_ksocket.ko
10 1 0xc6fd8000 2000 ng_tcpmss.ko
vel wrote: now I'm also going to set up NetFlow.
startup:
set netflow peer 192.168.1.1 9996
set netflow timeouts 10 10
pptp_server:
set iface enable netflow-in
set iface enable netflow-out
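With this config mpd exports NetFlow v5 datagrams to the collector at 192.168.1.1:9996. As a reference for what arrives there, a toy parser for the fixed 24-byte v5 header (field order per the NetFlow v5 export format; this is not mpd code):

```python
import struct

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes total)
NF5_HDR = struct.Struct("!HHIIIIBBH")
NF5_FIELDS = ("version", "count", "sys_uptime", "unix_secs", "unix_nsecs",
              "flow_sequence", "engine_type", "engine_id", "sampling_interval")

def parse_nf5_header(datagram: bytes) -> dict:
    """Parse the fixed header of a NetFlow v5 export datagram."""
    return dict(zip(NF5_FIELDS, NF5_HDR.unpack(datagram[:NF5_HDR.size])))

# round-trip a synthetic header for illustration
raw = NF5_HDR.pack(5, 30, 123456, 1200000000, 0, 42, 0, 0, 0)
hdr = parse_nf5_header(raw)
```

Each header is followed by `count` fixed-size flow records; a real collector would loop over those as well.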
MPDXart wrote: Could you tell me which RADIUS attributes you actually use to cap the speed, both on mpd and on the Cisco?
vel wrote: Figured it out; I'm installing MPD, but I'm having terrible problems with it.
For mpd:
Vendor: Attr: Value:
12341 7 in#1=all shape 512000 96000 pass
12341 7 out#1=all shape 512000 96000 pass
For Cisco:
Vendor: Attr: Value:
9 1 lcp:interface-config#1=rate-limit output 512000 96000 96000 conform-action transmit exceed-action drop
9 1 lcp:interface-config#1=rate-limit input 512000 96000 96000 conform-action transmit exceed-action drop
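In both variants the numbers mean the same thing: a rate in bits per second (512000) and a burst allowance (96000; the burst for mpd's shape, normal/max burst for Cisco's rate-limit). A toy token bucket shows how the two parameters interact; this is a simulation of the idea only, not mpd or IOS code:

```python
class TokenBucket:
    """Toy single-rate policer: conform -> transmit, exceed -> drop."""

    def __init__(self, rate_bps: int, burst_bytes: int):
        self.rate = rate_bps / 8.0        # token refill, bytes per second
        self.burst = float(burst_bytes)   # bucket depth, bytes
        self.tokens = self.burst          # bucket starts full
        self.last = 0.0

    def offer(self, now: float, size_bytes: int) -> bool:
        """True if a packet of size_bytes conforms at time `now` (seconds)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False

# "shape 512000 96000": 512 kbit/s sustained, 96000-byte burst allowance
tb = TokenBucket(512000, 96000)
```

One caveat: mpd's shape delays excess traffic (queues it) while a policer drops it, but the bucket arithmetic that decides "conform or exceed" is the same in both cases.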
To avoid overcomplicating things and loading the Cisco's CPU, you can do it like this:
In short, lcp:interface-config#1=rate-limit output 512000 96000 96000 conform-action transmit exceed-action drop, and it doesn't load the Cisco that heavily.
On the Cisco:
class-map match-all internal.in
match access-group name internal.in
class-map match-all internal.out
match access-group name internal.out
!
!
policy-map 1024in
class internal.in
police rate 1024000 burst 64000 peak-burst 64000
conform-action transmit
exceed-action drop
violate-action drop
class class-default
police rate 1024000 burst 64000 peak-burst 64000
conform-action transmit
exceed-action drop
violate-action drop
!
policy-map 1024out
class internal.out
police rate 1024000 burst 64000 peak-burst 64000
conform-action transmit
exceed-action drop
violate-action drop
class class-default
police rate 1024000 burst 64000 peak-burst 64000
conform-action transmit
exceed-action drop
violate-action drop
In UTM: vendor id 9, attribute id 1, for both in and out
Vendor: Attr: Value:
9 1 ip:sub-qos-policy-in=1024in
9 1 ip:sub-qos-policy-out=1024out
TiRider wrote: To avoid overcomplicating things and loading the Cisco's CPU, you can do it like this:
In UTM: vendor id 9, attribute id 1, for both in and out
Vendor: Attr: Value:
9 1 ip:sub-qos-policy-in=1024in
9 1 ip:sub-qos-policy-out=1024out
I'm struggling with this: the policy just won't apply, c7200-a3jk91s-mz.122-31.sb2... via attribute 250 with QU;2048000;D;2048000 it works,
but I need two speeds per client: local and internet.
TiRider wrote: Strange. It works for me. Oh, and I forgot to mention: I mark the local traffic. It gets through, bypassing the VPN and the notorious speed policies.
Yes, I managed to bind it (typos as always), and even the classification works; it's just that the speeds under the policy are, to put it mildly, inadequate. Or what did I forget?
policy-map c2048.out
class local
police rate 4096000 burst 64000
conform-action transmit
exceed-action drop
violate-action drop
class inet.out
police rate 2048000 burst 64000
conform-action transmit
exceed-action drop
violate-action drop
class class-default
police rate 4096000 burst 64000
conform-action transmit
exceed-action drop
violate-action drop
With sh policy-map int everything looks right, traffic lands in its classes, but the speeds measured by speedtest are too high and by iperf too low. On top of that, ICQ and FTP connections keep dropping...
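A likely culprit (an assumption on my part, not something confirmed in the thread) is the burst size: 64000 bytes is tiny next to a 2-4 Mbit/s rate, so the policer keeps dropping whole TCP windows, throughput oscillates, and that matches both the inconsistent speedtest/iperf numbers and the dropped sessions. A common rule of thumb from Cisco's policing examples sizes the normal burst at roughly rate/8 times 1.5 seconds:

```python
def suggested_burst_bytes(rate_bps: int, window_s: float = 1.5) -> int:
    """Rule-of-thumb normal burst: the bytes sent at the policed rate in ~1.5 s."""
    return int(rate_bps / 8 * window_s)

# compare with the fixed "burst 64000" used in the policy-maps above
for rate in (1024000, 2048000, 4096000):
    print(rate, "bps ->", suggested_burst_bytes(rate), "bytes")
```

By that yardstick the 2048000 bps class would want a burst around 384000 bytes, six times what is configured; worth trying before blaming the classification.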
postfix wrote: How do you mark the traffic?
route-map Mark_Traf permit 10
set ip precedence priority
!
route-map Mark_City permit 10
set ip precedence priority
!
route-map Mark_Inet permit 10
set ip precedence immediate
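For reference, the precedence names in these route-maps map onto the top three bits of the IPv4 TOS byte (RFC 791); that value is what the class-maps on the Cisco side ultimately match on. A small lookup sketch:

```python
# IP precedence names as IOS accepts them, mapped to values 0-7 (RFC 791)
PRECEDENCE = {
    "routine": 0, "priority": 1, "immediate": 2, "flash": 3,
    "flash-override": 4, "critical": 5, "internet": 6, "network": 7,
}

def tos_byte(precedence_name: str) -> int:
    """Precedence occupies bits 7-5 of the IPv4 TOS byte."""
    return PRECEDENCE[precedence_name] << 5

assert tos_byte("priority") == 0x20    # Mark_Traf / Mark_City
assert tos_byte("immediate") == 0x40   # Mark_Inet
```

So the city/local route-maps stamp packets with TOS 0x20 and the internet one with 0x40, which is enough to steer them into separate policer classes.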